Dataset schema (column: type, value range):
- forum_id: string, 9-20 chars
- forum_title: string, 3-179 chars
- forum_authors: sequence, 0-82 items
- forum_abstract: string, 1-3.52k chars
- forum_keywords: sequence, 1-29 items
- forum_decision: string, 22 distinct values
- forum_pdf_url: string, 39-50 chars
- forum_url: string, 41-52 chars
- venue: string, 46 distinct values
- year: date string, 2013-01-01 00:00:00 to 2025-01-01 00:00:00
- reviews: sequence
CNGkrfDhdG
Integrating Relation Dependences and Textual Semantics for Coherent Logical Reasoning over Temporal Knowledge Graph
[ "Qing Li", "Guanzhong Wu", "kaiwen wei" ]
Temporal knowledge graphs (TKGs) reflect the evolution patterns of facts, which can be summarized as logical rules and applied to forecast future facts. However, existing logical reasoning methods on TKGs face two limitations: 1) a lack of efficient strategies for extracting logical paths, and 2) insufficient utilization of structural and textual information. To bridge these gaps, we propose CoLR, a two-stage framework that mines relation dependencies and textual semantics for Coherent Logical Reasoning over TKGs. In the first stage, we construct a temporal relation structure graph (TRSG) composed of relations and the cohesion weights between them. In addition, we define a novel time-fusion search graph (TFSG) alongside the TRSG to facilitate efficient and reliable temporal path searching. In the second stage, the textual content and timestamp sequences from these paths are encoded via a pre-trained language model and a time sequence encoder to accurately capture potential logical rules. Additionally, for quadruplets that lack paths, historical edges sampled based on relation cohesion are used as supplements. Given the limitations of existing benchmark datasets in evaluating accuracy, generalization, and robustness, we construct three new datasets tailored to transductive, inductive, and few-shot scenarios, respectively. These datasets, combined with four real-world datasets, are employed to evaluate our model comprehensively. Experimental results demonstrate that our approach significantly outperforms existing methods across all three scenarios. Our code is available at https://anonymous.4open.science/r/CoLR-0839
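The rebuttals below note that the TRSG's relation-to-relation cohesion weights on each subgraph "can be obtained by multiplying two entity-relation matrices." The paper's actual Equation 2 is not reproduced in this record, so the following is only an illustrative sketch of that idea under assumed conventions: `cohesion_matrix`, its arguments, and the counting scheme (an entity touching relation r1 at time t and relation r2 at time t+1 contributes to the r1-r2 weight) are all hypothetical names and choices, not the authors' implementation.

```python
import numpy as np

def cohesion_matrix(triples_t, triples_t_next, num_entities, num_relations):
    """Illustrative sketch: relation-relation cohesion from two TKG snapshots.

    triples_* are lists of (subject, relation, object) facts at consecutive
    timestamps. The returned (R x R) matrix counts, for each relation pair
    (r1, r2), how often an entity involved in r1 at time t is also involved
    in r2 at time t+1 -- one matrix product per snapshot pair, which is why
    the cost scales with the number of timestamps rather than graph size.
    """
    # Entity-relation incidence matrices for the two snapshots.
    A = np.zeros((num_entities, num_relations))
    B = np.zeros((num_entities, num_relations))
    for s, r, o in triples_t:
        A[s, r] += 1
        A[o, r] += 1
    for s, r, o in triples_t_next:
        B[s, r] += 1
        B[o, r] += 1
    # A single product yields all pairwise co-occurrence counts at once.
    return A.T @ B

# Toy example: entity 0 participates in relation 0 at t and relation 1 at t+1,
# so the (0, 1) cohesion entry is non-zero.
cohesion = cohesion_matrix([(0, 0, 1)], [(0, 1, 2)], num_entities=3, num_relations=2)
print(cohesion[0, 1])  # -> 1.0
```

With a time window of size w, this product would be repeated for each of the roughly w * N snapshot pairs, matching the near-linear-in-N cost the authors report in their scalability response.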
[ "Temporal knowledge graph", "Knowledge graph", "Multi-hop logical rules", "Link forecasting", "Inductive reasoning" ]
Reject
https://openreview.net/pdf?id=CNGkrfDhdG
https://openreview.net/forum?id=CNGkrfDhdG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uKbrSRZnt4", "uGCl9WCAeJ", "si1X0tOIon", "q8NzyPt97f", "mWQbefAMwU", "iew4HVxIkY", "V4vawftlLB", "LN4JiLxwDk", "L259Zl6V1P", "Id7l7Ju8zH", "IUvF1n2gU1", "FvH7B4L1m4", "ASPShgqp5a", "1MhYrpciHY" ], "note_type": [ "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review" ], "note_created": [ 1732450904601, 1737523587570, 1730652561115, 1732700205988, 1730752412096, 1732450640340, 1730373468768, 1732450850520, 1732450339325, 1732616635094, 1733207853296, 1732449475660, 1734828117824, 1730800009489 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3651/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3651/Reviewer_otAn" ], [ "ICLR.cc/2025/Conference/Submission3651/Authors" ], [ "ICLR.cc/2025/Conference/Submission3651/Reviewer_R1ug" ], [ "ICLR.cc/2025/Conference/Submission3651/Authors" ], [ "ICLR.cc/2025/Conference/Submission3651/Reviewer_pLWV" ], [ "ICLR.cc/2025/Conference/Submission3651/Authors" ], [ "ICLR.cc/2025/Conference/Submission3651/Authors" ], [ "ICLR.cc/2025/Conference/Submission3651/Reviewer_otAn" ], [ "ICLR.cc/2025/Conference/Submission3651/Reviewer_R1ug" ], [ "ICLR.cc/2025/Conference/Submission3651/Authors" ], [ "ICLR.cc/2025/Conference/Submission3651/Area_Chair_At2d" ], [ "ICLR.cc/2025/Conference/Submission3651/Reviewer_aa3E" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer pLWV Part 2\", \"comment\": \">**Q1**: For the PLM module in the joint coding model, does the authors verify the adaptability of the proposed model to different PLMs.\\n\\nOur CoLR demonstrates strong adaptability to different PLMs. In the revised manuscript, we conducted experiments on ICEWS14 using BERT, ALBERT, and MPNet as the backbone PLMs for CoLR. The results are showed in Figure 6(a). 
It can be observed that there are only minor performance differences among the three models, which may be attributed to variations in their sentence understanding capabilities. The differences can be further narrowed through fine-tuning. Overall, our CoLR is insensitive to the choice of PLM, exhibiting excellent flexibility.\\n\\n>**Q2**: I would like to know if it is possible to consider training the model on a single dataset and evaluating it on different test sets in the context of inductive learning. For example, after training on ICEWS14, the model's performance on ICEWS18 and ICEWS05-15 can be on par with SOTA's baseline.\\n\\nWe believe that our proposed CoLR can address the hypothesis you raised due to its strong inductive capabilities. First, CoLR captures the cohesion between relations by constructing a temporal relation structure graph. As shown in Figure 4, the structural dependency between relations remains stable across different datasets. Since CoLR makes predictions based on the logical connections between relations, which are entity-agnostic, it maintains consistent reasoning performance even when the entity sets differ. Furthermore, when new relations appear in a dataset, CoLR leverages the PLM to capture the logical associations between relations from textual semantics. As a result, CoLR can be trained on one dataset and applied to reasoning tasks on multiple datasets while maintaining stable performance. As shown in Table 5 in Appendix C.4, despite being trained on ICEWS14, CoLR outperforms the SOTA baseline ILR-IR on ICEWS18 and ICEWS05-15.\\n\\nThank you again for your feedback and suggestions. We hope that our thorough responses, along with the new results, will further underscore the value of our work. Your insights are invaluable in refining our paper. Please let us know if you have any further questions or concerns. 
We are committed to improving our paper and value your feedback.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The authors primarily address the shortcomings of rule-based reasoning methods in the field of temporal knowledge graph extrapolation. To this end, they propose two novel graph structures: the Time-Fusion Search Graph (TFSG) and the Temporal Relation Structure Graph (TRSG). Furthermore, they introduce the CoLR model, which employs a two-phase framework to mine relational dependencies and semantic structures within temporal knowledge graphs. The experimental results demonstrate the effectiveness of the proposed methods, validating their ability to make predictions and capture structural information in sparse data scenarios.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Innovatively proposed the Temporal Relation Structure Graph (TRSG), which effectively captures stable structural information in temporal graphs.\\n2. Treated rules as text sequences: by utilizing pre-trained text sequence encoders and time series encoders for learning, neural-symbolic integrated reasoning is achieved.\\n3. In the paper, three novel datasets are introduced, accompanied by comprehensive experiments designed to rigorously validate the performance of the proposed methods from multiple perspectives.\", \"weaknesses\": \"There are some issues in the main experimental results section.\\n\\n1. The tasks in the field of temporal knowledge graph extrapolation vary. For instance, CENET uses triplet filtering for results and performs future predictions at any time point, whereas TiRGN employs quadruplet filtering for results and only predicts queries for the next time point. The authors' direct comparison of these two methods is obviously unreasonable. \\n\\n2. Additionally, the paper does not specify how the baseline results were obtained, and the provided code link is inaccessible. 
The authors need to specify the type of tasks performed for the results obtained using the CoLR model, as well as whether triplet or quadruplet filtering was applied to these results.\", \"questions\": \"See the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response and feedback! In response to the questions you raised regarding the dataset, we provide the following clarifications to address your concerns.\\n>The ICEWS series of datasets may face data leakage issues due to the use of external knowledge.\\n\\nWe chose the ICEWS series datasets as they are among the most widely used benchmark datasets for temporal knowledge graph reasoning. As shown in Table 1 in the manuscript, rule-based methods only reported results on ICEWS datasets. Therefore, experiments on ICEWS provide a more direct demonstration of our method's superior performance.\\n\\nWe acknowledge your concern regarding potential data leakage in the ICEWS series datasets due to our incorporation of pre-trained language models (PLMs). To address this, we have also reported our CoLR's experimental results on YAGO, another widely-used benchmark dataset, where CoLR consistently outperforms other baselines, validating the effectiveness of our proposed method. Moreover, compared to PPT, which also incorporates PLMs, our method achieves nearly double the performance improvement, suggesting that potential data leakage risks are either non-existent or minimal for PLMs. 
We present the comparison with PPT on the MRR metric as follows:\\n| Method | ICEWS14 | ICEWS18 | ICEWS05-15 |\\n|--------|---------|---------|------------|\\n| RE-GCN | 37.78 | 27.51 | 38.27 |\\n| PPT | 38.24 | 26.63 | 38.85 |\\n| CoLR | 75.72 | 68.74 | 76.82 |\\n\\nAs shown in the table, PPT, despite incorporating BERT, performs worse than the traditional baseline RE-GCN on the ICEWS18 dataset, further confirming that the impact of potential data leakage of ICEWS datasets is negligible. Additionally, to explicitly prevent data leakage risks, we constructed the ACLED2023 dataset. As described in Section 6.1, lines 353-358 of the manuscript, ACLED2023 effectively avoids data leakage by introducing events that PLMs could not have been exposed to. The experimental results on ACLED2023 demonstrate that CoLR's performance stems from the proposed method itself rather than data leakage.\\n\\n>Meanwhile, the newly constructed ACLED2023 dataset is too small and singular, making it insufficient to demonstrate the effectiveness of the method.\\n\\nWith the recent trend of leveraging PLMs and LLMs for temporal knowledge graph reasoning tasks, the knowledge contained in existing benchmark datasets may have already been learned by these language models as part of their training corpus. To mitigate the risk of data leakage affecting model performance, we constructed the ACLED2023 dataset. As shown in Table 1 of the manuscript, CoLR consistently outperforms existing baselines on this dataset, demonstrating that its superior performance is unrelated to data leakage. This also ensures the validity of CoLR's results on other datasets, which have not benefited from potential leakage. Furthermore, as illustrated in Table 4 of Appendix C.3, the scale of ACLED2023 is comparable to the conventional benchmark dataset ICEWS14. Therefore, we believe that the experimental results on ACLED2023 sufficiently demonstrate the effectiveness of CoLR. 
Based on your suggestion, we plan to consider expanding ACLED2023 into ACLED2024 in future work to provide a more robust evaluation of model performance.\\n\\nThank you again for your response. If you have any further questions or suggestions, please do not hesitate to let us know.\"}", "{\"summary\": \"This paper proposes a two-stage framework named CoLR for coherent logical reasoning over TKGs. The framework integrates relation dependencies and textual semantics to enhance the performance of link forecasting tasks. The key contributions include the construction of a temporal relation structure graph (TRSG) to capture structural dependencies, a time-fusion search graph (TFSG) to efficiently extract reliable temporal paths, and the encoding of textual and timestamp sequences using pre-trained language models and time sequence encoders. The authors construct three new datasets to comprehensively evaluate the model, demonstrating SOTA performance across transductive, inductive, and few-shot scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The approach of combining relation dependencies and textual semantics in a two-stage framework for TKG reasoning is novel. The TRSG and TFSG concepts, along with the path supplement strategy, are innovative contributions.\\n2. The methodology is rigorously designed and theoretically grounded. The proof for the cohesion matrix calculation and the algorithm for path extraction are well-documented.\\n3. The paper is well-structured and clearly written. Definitions, formulations, and algorithms are explained in detail, making the approach reproducible.\\n4. The proposed model achieves state-of-the-art results on multiple datasets, demonstrating its effectiveness and generalizability. 
The new datasets provide valuable resources for future research in this domain.\", \"weaknesses\": \"While the scalability of the approach is discussed, concrete experiments demonstrating its performance on larger TKGs are missing. The computational complexity of the TRSG construction and path extraction could become a bottleneck for very large graphs.\\nThe authors could consider including visualizations of the TRSG and TFSG to intuitively illustrate their structures and how they facilitate path extraction.\", \"questions\": \"How sensitive is the model to the choice of pre-trained language model? Have you experimented with different language models to see the impact on performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Reviewer otAn\", \"comment\": \"Thank you for investing your time and expertise in reviewing our work. We are grateful for your recognition of our contributions in conceptual innovation, theoretical justification, reproducible, and benchmark construction, and we are delighted to clarify the concerns and answer the questions you raised.\\n\\n>**Q1**:The tasks in the field of temporal knowledge graph extrapolation vary. For instance, CENET uses triplet filtering for results and performs future predictions at any time point, whereas TiRGN employs quadruplet filtering for results and only predicts queries for the next time point. The authors' direct comparison of these two methods is obviously unreasonable.\\n\\nThank you for your insightful comments regarding the comparison results. Upon careful examination, we found that the experimental setup of CENET was indeed inconsistent with other baselines, including our CoLR. To ensure a fairer comparison, we followed the experimental setup of TiRGN and re-reported CENET's results under the time-aware filtering setting. The updated experimental results can be found in Table 1. 
Notably, under a unified experimental setup, our CoLR achieves the best performance across all datasets, significantly outperforming the second-best baseline.\\n\\n>**Q2**: The paper does not specify how the baseline results were obtained.\\n\\nThe experimental results of all baselines on the existing benchmark datasets are taken from the highest reported results in previous papers. Considering that CENET\\u2019s experimental setup differs from other baselines, we reproduced its results on the ICEWS and YAGO datasets under the time-filtering setting for a fair comparison. For the proposed dataset, we conducted experiments for each baseline, including our CoLR, using their parameter settings from ICEWS14 and reported the results. In the revised manuscript, we have provided details about how we obtained the baseline results in lines 399-408 of the experiments section.\\n\\n>**Q3**: The provided code link is inaccessible.\\n\\nThank you for your attention to our work. In the initially submitted version, the core code of our method has been open-sourced through the [anonymous code link](https://anonymous.4open.science/r/CoLR-0839) provided in the paper. We will release all our code, including the newly proposed datasets, immediately after the paper is accepted.\\n\\n>**Q4**: The authors need to specify the type of tasks performed for the results obtained using the CoLR model, as well as whether triplet or quadruplet filtering was applied to these results.\\n\\nThank you for your insightful feedback. Following the TiRGN setup, we performed quadruplet filtering under the time-filtering setting and only predicted future events at the next timestamp. We have clarified our task type in Appendix C.2 of the revised version.\\n\\nThank you once again for your valuable feedback and comments! 
If there are any further questions or aspects you feel remain unaddressed, we are more than willing to provide additional information and clarifications as needed.\"}", "{\"summary\": \"This paper proposes a time-fusion search method based on a temporal relationship structure graph to address the problem of existing TKGR models being difficult to effectively extract logical paths from temporal KGs. Afterwards, the joint pre-trained language model and GRU model is proposed to obtain more complete logical semantics in the logical path from the perspectives of logical context and time series, effectively improving the inference quality of the TKGR model. In addition, to better validate the effectiveness of the model, this paper constructs three new datasets to measure the inference accuracy, generalization, and robustness of the TKGR model in transitive, inductive, and few shot scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.Solid theoretical analysis: The descriptions of constructing the temporal relation structure graph and reasoning path search algorithm are relatively clear.\\n\\n2.Adequate experimental validation: The effectiveness of the model in reasoning accuracy and generalization performance was verified through transitive, inductive, and few shot scenarios.\\n\\n3.The proposed new datasets expand the evaluation benchmarks in the TKGR field.\", \"weaknesses\": \"1.The description of challenge in the abstract and introduction does not correspond well. 
\\u201cNonetheless, the majority of paths between the subject and object......lacking direct connectivity between subject and object entities\\u201d in the introduction seems to focus more on discussing the first challenge in the abstract, and the \\\"Insufficient utilization of structural and textual information\\\" in the abstract does not provide corresponding motivation and background analysis in the introduction.\\n\\n2.The joint encoding method of time and text sequences seems a bit outdated. Although the experimental results demonstrate the effectiveness of this method, I would like to know if utilizing some more cutting-edge LLM based TKG learning models (e.g., [1], [2], [3]) can further improve the model's performance.\\n\\n[1] Wang, Jiapu, et al. \\\"Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning.\\\"\\u00a0arXiv preprint arXiv:2405.14170\\u00a0(2024).\\n\\n[2] Xia, Yuwei, et al. \\\"Enhancing temporal knowledge graph forecasting with large language models via chain-of-history reasoning.\\\"\\u00a0arXiv preprint arXiv:2402.14382\\u00a0(2024).\\n\\n[3] Luo, Ruilin, et al. \\\"Chain of history: Learning and forecasting with llms for temporal knowledge graph completion.\\\"\\u00a0arXiv preprint arXiv:2401.06072\\u00a0(2024).\\n\\n3.Omission in the experiment sections: (i)The window parameter w for the YAGO dataset was not provided. (ii)The value of \\u03b4 in Formula 3 is not explicitly given. (iii)It seems that Figure 5 in the appendix is not described in the text and has no clear indication of the experimental dataset.\\n\\n4. The quality of the presentation is below ICLR 2025 standards. For example, the format of the references should be consistent to ensure neatness and professionalism. 
For instance, the names of conferences should be uniformly presented either in abbreviations or full names, rather than a mixture of both.\", \"questions\": \"Besides the issues in weakness, there are a few other issues I would like to know:\\n\\n1.For the PLM module in the joint coding model, does the authors verify the adaptability of the proposed model to different PLMs.\\n\\n2.I would like to know if it is possible to consider training the model on a single dataset and evaluating it on different test sets in the context of inductive learning. For example, after training on ICEWS14, the model's performance on ICEWS18 and ICEWS05-15 can be on par with SOTA's baseline.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer pLWV Part 1\", \"comment\": \"Thank you so much for the thoughtful questions and suggestions. We hope that our response below will address your concerns.\\n>**W1**: The description of challenge in the abstract and introduction does not correspond well.\\n\\nThank you for your valuable feedback. In the revised version, we have added a motivation and background analysis for Limitation 2: the insufficient utilization of structural and textual information, in the abstract and introduction sections. For your convenience, the additional content is presented as follows:\\n\\n\\\"Furthermore, these methods insufficiently utilize the rich structural and textual information in temporal graphs. Symbolic approaches leverage the frequency information of relations, while neural methods focus on graph structural information. Both overlook the positive role of the structural dependencies between relations and textual semantics in logical reasoning.\\\"\\n\\n>**W2**: The joint encoding method of time and text sequences seems a bit outdated. 
Although the experimental results demonstrate the effectiveness of this method, I would like to know if utilizing some more cutting-edge LLM based TKG learning models (e.g., [1], [2], [3]) can further improve the model's performance.\\n\\nThanks for your suggestions. References [1-2] have attempted to integrate LLMs to enhance reasoning on temporal knowledge graphs (TKG). They focus on utilizing chain-of-thought prompting templates to guide LLMs in inferring future events from historical event chains. However, these methods are challenging for untrained LLMs, as they can only perform semantic reasoning based on limited context without explicitly leveraging temporal patterns and the underlying logical correlations between events.\\n\\nIn future work, we plan to explicitly integrate temporal logical rules with the strong reasoning capabilities of LLMs. Although Reference [3] also utilizes logical rules and LLMs, the LLMs are only involved in rule generation. In contrast, we aim to incorporate LLMs into the reasoning phase. Additionally, LLMs can be employed to interpret textual descriptions, assisting pre-trained language models in understanding textual semantics.\\n\\n>**W3**: Omission in the experiment sections: (i)The window parameter w for the YAGO dataset was not provided. (ii)The value of \\u03b4 in Formula 3 is not explicitly given. (iii)It seems that Figure 5 in the appendix is not described in the text and has no clear indication of the experimental dataset.\\n\\nThank you for your valuable feedback. We sincerely apologize for any confusion caused by our oversights. In response to your concerns, we have supplemented the experimental section in the revised version to avoid misunderstandings caused by missing details. 
These supplements include: (i) clarifying the setting of the time window parameter $\\\\omega$ for the YAGO dataset, (ii) providing the settings for the hyperparameter $\\\\delta$ and the margin parameter of confidence $\\\\gamma$, and (iii) adding a description in the appendix regarding Figure 5, including details about the experimental dataset and result analysis. Please refer to Appendices C.2 and C.5 in the revised version for the aforementioned modifications.\\n\\n>**W4**: The quality of the presentation is below ICLR 2025 standards. For example, the format of the references should be consistent to ensure neatness and professionalism. For instance, the names of conferences should be uniformly presented either in abbreviations or full names, rather than a mixture of both.\\n\\nThanks for your valuable feedback. We sincerely apologize for the oversight in the formatting of our reference list. We have carefully revised and adjusted the references to ensure they are consistent and well-organized. Please refer to the References section in the revised version to review our updates. Additionally, we have thoroughly reviewed the presentation and structure of the paper to ensure it meets the standards of ICLR 2025.\\n\\n**Reference:**\\n\\n[1] Xia, Yuwei, et al. \\\"Enhancing temporal knowledge graph forecasting with large language models via chain-of-history reasoning.\\\" arXiv preprint arXiv:2402.14382 (2024).\\n\\n[2] Luo, Ruilin, et al. \\\"Chain of history: Learning and forecasting with llms for temporal knowledge graph completion.\\\" arXiv preprint arXiv:2401.06072 (2024).\\n\\n[3] Wang, Jiapu, et al. 
\\\"Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning.\\\" arXiv preprint arXiv:2405.14170 (2024).\"}", "{\"comment\": \"We appreciate your thorough review as well as constructive feedback, and we try to answer your concerns and questions as follows.\\n>**W1**: While the scalability of the approach is discussed, concrete experiments demonstrating its performance on larger TKGs are missing. The computational complexity of the TRSG construction and path extraction could become a bottleneck for very large graphs.\\n\\nThank you very much for your valuable feedback. In the revised version, we have added efficiency analyses of TRSG and path search in Appendices A.3 and C.5, respectively.\\n\\nFor TRSG construction, larger TKGs do not significantly lead to computational bottlenecks. As shown in Equation 2, the TRSG on each subgraph can be obtained by multiplying two entity-relation matrices, which is efficiently implemented in PyTorch. Therefore, as the graph size expands, the computational cost mainly arises from the increased number of matrix multiplications due to the extension of the timestamp sequence. For a TKG with $N$ timestamps, when the time window $\\\\omega$ is 1, the times of matrix multiplications is $N$; when the time window size is $N$, the computation times is $1/2(N*(N+1))$. Thus, the time complexity for constructing a TRSG lies between $O(N)$ and $O(N^2)$. Since the time window is typically much smaller than $N$, the time complexity of constructing TRSG approaches $O(N)$. Clearly, as the number of timestamps in the TKG increases, the computational cost does not grow dramatically. For example, in the ICEWS05-15 dataset with over 4000 timestamps, constructing a TRSG with a time window of 10 takes only 20 seconds. ICEWS05-15 is the second-largest TKG benchmark dataset in terms of the number of timestamps, surpassed only by GDELT. 
However, we did not conduct experiments on GDELT because the textual descriptions of relations in GDELT are presented in a special coded format. Nonetheless, we believe the experimental results on ICEWS05-15 sufficiently demonstrate that our method can effectively scale to larger datasets.\\n\\nFor path extraction, by introducing TRSG and TFSG, our time-fusion path search algorithm can efficiently identify historical paths relevant to the query. Specifically, TRSG ensures that CoLR only needs to retrieve the top-K historical paths, avoiding the additional computational cost of retrieving unrelated paths. TFSG preserves temporal information while compressing the TKG into a static graph, eliminating the need for re-expanding the static graph back into a TKG. Similarly, taking ICEWS05-15 as an example, CoLR can complete the historical path search for all quadruplets within three minutes, while TLogic requires approximately 15 minutes. The detailed experimental results are shown in Table 8 of the revised version.\\n\\nIn summary, we believe that the computational complexity of TRSG construction and path retrieval will not become a bottleneck for applying CoLR to large-scale graphs.\\n\\n>**W2**: The authors could consider including visualizations of the TRSG and TFSG to intuitively illustrate their structures and how they facilitate path extraction.\\n\\nThank you very much for your suggestions. In the initially submitted version, we provided visualizations of the TRSG and TFSG in Figures 2(d) and 3(b), respectively. To further clarify their structures and their roles in path extraction, we plan to release a demonstration video of graph construction and path search on the paper's homepage after its publication.\\n\\n>**Q1**: How sensitive is the model to the choice of pre-trained language model? Have you experimented with different language models to see the impact on performance?\\n\\nOur CoLR is insensitive to the choice of pre-trained language models. 
In the revised manuscript, we provided sensitivity experiments regarding CoLR's performance with different PLMs. As shown in Figure 6(a), replacing the PLM only caused minor performance differences, which can be attributed to the varying sentence understanding capabilities of different language models. The differences can be further narrowed through fine-tuning. Therefore, our CoLR is insensitive to the choice of PLM, exhibiting excellent flexibility.\\n\\nThank you once again for your valuable feedback and comments! If there are any further questions or aspects you feel remain unaddressed, we are more than willing to provide additional information and clarifications as needed.\", \"title\": \"Responses to Reviewer R1ug\"}", "{\"comment\": \"The ICEWS series of datasets may face data leakage issues due to the use of external knowledge. Meanwhile, the newly constructed ACLED2023 dataset is too small and singular, making it insufficient to demonstrate the effectiveness of the method.\"}", "{\"title\": \"Thanks\", \"comment\": \"Thank you for addressing my concerns! I think I will keep my rating.\"}", "{\"title\": \"Responses to Reviewer aa3E\", \"comment\": \"Thank you for your valuable feedback. We are greatly delighted to note your recognition of the contributions our paper makes in terms of method design, concept proposal, and benchmark construction. We are happy to address the questions you\\u2019ve raised.\\n>**W1**: The paper does not provide a detailed analysis of CoLR's computational complexity. Metrics such as runtime comparisons are not thoroughly discussed. \\n\\nIn the revised manuscript, we have provided an analysis of computational complexity and a comparison of runtime with other methods in Appendix A.3 and Appendix C.5, respectively. The primary time cost of our CoLR comes from the fine-tuning of the PLM, which is the main reason our method requires longer training time compared to other baselines. 
Despite our adoption of LoRA to enhance training efficiency, the training still consumes several hours. However, when using GRU as the encoder, CoLR-GRU achieves comparable training and inference efficiency with other baselines while maintaining optimal performance. Moreover, a more advantageous approach would be to use the PLM solely as an encoder to initialize entity and relation embeddings without participating in forward and backward propagation. It allows us to leverage the prior semantics of the PLM while ensuring efficient training and inference.\\n\\n>**W2**: The time-fusion path extracting process, especially with the addition of the TRSG and TFSG, might introduce computational bottlenecks. It's unclear how the proposed method scales with larger graphs or longer time windows.\\n\\nWe completely understand your concerns; however, we believe that the introduction of TRSG and TFSG will not cause a computational bottleneck for path extraction. On the contrary, TRSG and TFSG are key to improving the efficiency of path search. Specifically, TRSG ensures that CoLR only needs to retrieve the top-K historical paths, avoiding the additional computational cost of retrieving unrelated paths. TFSG preserves temporal information while compressing the TKG into a static graph, eliminating the need for re-expanding the static graph back into a TKG. Since we only need to retrieve the top-K paths most relevant to the query, the time required for path retrieval does not increase linearly with the graph size or time window. In Table 8 of the revised version, we provide comparative experiments on path search efficiency. CoLR completes historical path retrieval for all quadruplets on the ICEWS14 dataset in 40 seconds, while on ICEWS05-15, which has ten times the timestamps of ICEWS14, it only takes 3 minutes. 
In comparison, TLogic requires 15 minutes to complete path retrieval on ICEWS05-15.\\n\\n>**W3**: While an ablation study is presented, it might not comprehensively cover all components of the model. For example, the impact of different path lengths (L) or the number of paths (K) on performance is not explored.\\n\\nThank you very much for your professional comments. In the revised manuscript, we have provided a more comprehensive evaluation and analysis of our CoLR in Appendix C.5, including scalability assessments and sensitivity evaluations regarding time window $\\\\omega$ and the number of paths. Based on your suggestions and those of other reviewers, we have added sensitivity analyses for the pre-trained encoder and the maximum path length, along with efficiency analyses on training time and path search time. Please refer to Appendix C.5 in the revised version for detailed experimental results and analysis.\\n\\n>**W4**: The framework\\u2019s reliance on cohesive relations may lead to suboptimal explanations for events connected by low-cohesion paths, affecting interpretability in sparse data scenarios.\\n\\nWe fully understand your concerns. Similar to other methods that rely solely on frequency information for path extraction, if CoLR depends only on the cohesion between relations, it may fail to explain paths with low cohesion. Therefore, we consider incorporating additional information during path extraction, such as relation frequency. Moreover, these paths can also be interpreted through the PLM by leveraging textual semantic relevance. For instance, two relations that are disconnected on the graph may still be logically related in terms of textual semantics.\\n\\nThank you again for reviewing our paper and for the positive comments. We hope that our response and clarification have addressed your questions and concerns.
We sincerely invite you to engage with us if you have more questions.\"}", "{\"metareview\": [\"(a) Scientific Claims and Findings\", \"The paper introduces a novel method, CoLR, that integrates structural and textual semantics for reasoning over temporal knowledge graphs (TKGs). Key innovations include the TRSG for relation cohesion analysis and TFSG for efficient temporal path searching. Results on existing benchmarks and newly proposed datasets show state-of-the-art performance across transductive, inductive, and few-shot scenarios.\", \"(b) Strengths\", \"Novel combination of structural dependencies and textual semantics in TKG reasoning.\", \"Rigorous theoretical grounding and method design, including cohesion matrix computation and efficient path extraction.\", \"Introduction of three datasets tailored for specific reasoning challenges.\", \"(c) Weaknesses\", \"Limited scalability analysis; concerns about computational complexity were not fully addressed initially.\", \"Some experimental comparisons (e.g., CENET) were based on inconsistent setups, later corrected in the rebuttal.\", \"Concerns regarding the adaptability of methods to newer large language models.\", \"Presentation issues in earlier submissions, including inconsistent formatting and missing details in experimental descriptions.\", \"(d) Decision: reject\", \"While the paper proposes an interesting framework, unresolved concerns about scalability, reliance on older pre-trained models, and limited dataset scope undermine its contributions. Presentation issues and inconsistent experimental setups further detract from its quality. 
Importantly, the reviewers did not strongly support the acceptance of the paper, and two of the favorable scores both come with a low confidence level, leading to insufficient support for acceptance.\"], \"additional_comments_on_reviewer_discussion\": [\"During the rebuttal phase, the authors addressed reviewer concerns effectively:\", \"Reviewer aa3E: Concerns about computational complexity and ablation studies were addressed with added sensitivity analyses and runtime comparisons, showing efficiency improvements with TRSG/TFSG.\", \"Reviewer R1ug: Scalability concerns and adaptability to different pre-trained models were clarified, with added experiments demonstrating minor performance variation across models.\", \"Reviewer otAn: Issues with experimental setups (e.g., CENET comparisons) were corrected, and data leakage concerns in ICEWS datasets were mitigated by additional experiments on YAGO and ACLED2023.\", \"Reviewer pLWV: Presentation issues were resolved, and omitted details were clarified. The authors also highlighted the model's inductive capabilities, demonstrating stable performance across datasets in transfer learning scenarios.\", \"Despite these efforts, two reviewers (otAn, pLWV) maintained slightly below-threshold scores due to lingering concerns about dataset limitations and the use of older modeling techniques.\"]}", "{\"summary\": \"The paper introduces CoLR, a two-stage framework designed to improve logical reasoning over TKGs by integrating structural dependencies and textual semantics. The approach constructs a Temporal Relation Structure Graph to identify relations and their temporal cohesion, alongside a Time-Fusion Search Graph for reliable path searching. CoLR then encodes both the textual content and timestamp sequences with a pre-trained language model and a time sequence encoder, enhancing predictive reasoning on TKGs. 
Experimental results show that CoLR significantly outperforms previous methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe CoLR framework effectively combines structural dependencies and textual semantics, which improves logical reasoning over TKGs.\\n2.\\tBy constructing new datasets tailored for specific reasoning challenges, the authors ensure a thorough evaluation of their model's capabilities in transductive, inductive, and few-shot scenarios.\\n3.\\tThe TRSG and TFSG facilitate efficient pathfinding and extraction, reducing computational costs and enabling the model to handle complex reasoning tasks.\", \"weaknesses\": \"1.\\tThe paper does not provide a detailed analysis of CoLR's computational complexity. Metrics such as runtime comparisons are not thoroughly discussed. The time-fusion path extracting process, especially with the addition of the TRSG and TFSG, might introduce computational bottlenecks. It's unclear how the proposed method scales with larger graphs or longer time windows.\\n2.\\tWhile an ablation study is presented, it might not comprehensively cover all components of the model. For example, the impact of different path lengths (L) or the number of paths (K) on performance is not explored.\\n3.\\tThe framework\\u2019s reliance on cohesive relations may lead to suboptimal explanations for events connected by low-cohesion paths, affecting interpretability in sparse data scenarios.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
CN328Aw03P
Multi-modal graph neural networks for localized off-grid weather forecasting
[ "Qidong Yang", "Jonathan Giezendanner", "Daniel Salles Civitarese", "Johannes Jakubik", "Eric Schmitt", "Anirban Chandra", "Jeremy Vila", "Detlef Hohl", "Christopher Hill", "Campbell D Watson", "Sherrie Wang" ]
Urgent applications like wildfire management and renewable energy generation require precise, localized weather forecasts near the Earth's surface. However, weather forecast products from machine learning or numerical weather models are currently generated on a global regular grid, on which a naive interpolation cannot accurately reflect fine-grained weather patterns close to the ground. In this work, we train a heterogeneous graph neural network (GNN) end-to-end to downscale gridded forecasts to off-grid locations of interest. This multi-modal GNN takes advantage of local historical weather observations (e.g., wind vector, temperature) to correct the gridded weather forecast at different lead times towards locally accurate forecasts. Each data modality is modeled as a different type of node in the graph. Using message passing, the node at the prediction location aggregates information from its heterogeneous neighbor nodes. Experiments using weather stations across the Northeastern United States show that our model outperforms a range of data-driven and non-data-driven off-grid forecasting methods. Our approach demonstrates how the gap between global large-scale weather models and locally accurate predictions can be bridged to inform localized decision-making.
[ "Weather forecasting", "Graph Neural Network", "Multi-modal", "off-grid weather forecasting", "heterogeneous graph neural network", "climate", "climate change", "sustainability" ]
Reject
https://openreview.net/pdf?id=CN328Aw03P
https://openreview.net/forum?id=CN328Aw03P
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kAHkeT3zwz", "fvMdnozKem", "aIltGLhVef", "aADd8nBcGw", "RiKnq2kaXP", "LndPOgPo9N", "KNkKNesrow", "B5sKvnYcGb" ], "note_type": [ "official_review", "official_review", "official_review", "meta_review", "decision", "official_review", "official_comment", "official_review" ], "note_created": [ 1729913320855, 1730525246946, 1730608348503, 1734712626666, 1737524161800, 1730691254316, 1731711370565, 1730718298067 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12035/Reviewer_dgLi" ], [ "ICLR.cc/2025/Conference/Submission12035/Reviewer_wNPa" ], [ "ICLR.cc/2025/Conference/Submission12035/Reviewer_y6WF" ], [ "ICLR.cc/2025/Conference/Submission12035/Area_Chair_8VU5" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12035/Reviewer_8RFz" ], [ "ICLR.cc/2025/Conference/Submission12035/Authors" ], [ "ICLR.cc/2025/Conference/Submission12035/Reviewer_Ugv9" ] ], "structured_content_str": [ "{\"summary\": \"This paper develops a multi-modal graph neural network (GNN) model to improve localized, off-grid weather forecasting by integrating global and local weather data. The model uses global ERA5 reanalysis data and local MADIS weather station observations, constructing a heterogeneous graph to leverage spatial and temporal correlations, with message-passing neural networks (MPNNs) to predict future wind conditions. It improved prediction accuracy over baseline methods for off-grid weather forecasting.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper has a clear application scenario. The abstract introduces the use case of \\u201cprecise, localized weather forecasting\\u201d right from the start, mentioning scenarios such as \\u201cfire management\\u201d and \\u201crenewable energy generation.\\u201d\\n2. The figures in the paper have a consistent style, with coordinated color schemes and clear content, making the overall presentation visually appealing.\\n3. 
The experiments are well-designed with reasonable spatial and temporal ranges. The ablation study investigating the impact of ERA5 inputs on model performance adds to the comprehensiveness of the experiments.\", \"weaknesses\": \"Firstly, I believe the paper's quality has not yet reached the standard of a top conference like ICLR. I suggest the author consider submitting to a more suitable venue or work towards a more impactful contribution to the topic, rather than an incremental improvement. I have noted some weaknesses in the writing of the paper below, but the essential problem of this paper is that it lacks breakthrough novelty.\\n\\n1. The Introduction section lacks a discussion of the motivation. It does not systematically explain the motivation behind the work, nor does it thoroughly compare the proposed model with previous models.\\n2. The summary of contributions at the end of the Introduction has a high degree of repetition with earlier sections and is not concise or well-summarized.\\n3. The innovative aspect of the model is restrictive. The paper only uses local observation data to adjust global forecasts, which is restrictive.\\n4. The presentation of Figure 1 is not closely connected to the context and does not provoke further thought. It is merely shown without being integrated into the discussion. The symbol representing time in Figure 2 is also a bit small.\\n5. The related work section only briefly lists some relevant works without providing a systematic or organized analysis. Specifically, in the Gridded Weather Forecasting section, only two types of machine learning methods are listed without explaining their advantages or improvements. It also fails to highlight how the proposed model builds on or innovates beyond previous models.\\n6. The Discussion and Conclusion sections are not concise enough.
Some paragraphs are overly lengthy, especially in the experimental results section where certain analyses are repeated, making the content appear redundant.\\n7. The paper uses many technical terms related to weather forecasting, which may be difficult for cross-disciplinary readers to understand due to a lack of background information.\\n8. The paper does not mention the conditions for actual deployment or the hardware resources required, making it hard to replicate the results.\\n9. The paper does not discuss limitations, nor does it analyze the computational costs, which affects the credibility of the work.\\n10. The Method section lacks an analysis of the model architecture, making it difficult for readers to understand the internal structure of the model.\\n11. The paper places the model architecture diagram in the Appendix, which makes it inconvenient for readers and easy to overlook. The author should reorganize the content of the paper.\\n12. The paper only describes the meaning of each term in the equations (eqs. 6 and 10) without explaining the overall logic of the equations. There is no explanation of the internal relationships between the equations. The explanations of equations in the Method section are isolated and not interconnected.\\n13. The paper's baseline is incomplete and does not include state-of-the-art approaches.\", \"questions\": \"1. The paper selects the 5 nearest MADIS neighbors and the 8 nearest ERA5 neighbors to construct the graph structure. Why were these numbers chosen? Why did a fully connected MADIS graph show no improvements? What about a fully connected ERA5 graph?\\n2.
What is the reason for the slight reduction in error in the 48-hour prediction?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper combines global reanalysis data (ERA5) and local weather station data (MADIS) to forecast the future values of weather stations. They construct a heterogeneous graph to connect the nodes represented by weather stations and grids and achieve the forecasts using a GNN. The proposed model outperforms simple interpolation and persistence methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. It is interesting to combine local weather station data and global gridded weather data for localized weather forecasting.\\n2. It is promising to introduce machine learning methods to address the weather forecasting tasks.\", \"weaknesses\": \"1. As weather station forecasting is essentially a spatio-temporal prediction task, the introduction of GNNs and heterogeneous graphs for this type of problem is not novel, as it has been extensively explored in prior research [1].\\n2. The validation is very weak, as only a few simple interpolation and persistence methods are compared. Incorporating more powerful numerical weather prediction and machine learning methods would strengthen the evaluation. \\n3. The absence of physical constraints in the proposed model raises significant concerns about the reliability and robustness of the model.\\n4. No code or datasets are provided, making this paper difficult to replicate.\\n\\n[1] Spatio-temporal graph neural networks for predictive learning in urban computing: A survey. 
TKDE, 2023.\", \"questions\": \"From Figure 6, while the improvement of introducing ERA5 is limited for MLP, why is it so significant for MPNN?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper first collects a dataset that contains both global weather reanalysis (ERA5) and local weather station observations (MADIS), spanning 2019\\u20132023 and covering the Northeastern United States. It then applies a heterogeneous graph model on this graph dataset and makes forecasts at each weather station. The proposed dataset features off-grid station nodes\\u2019 irregular geometry and theoretically infinite spatial resolution. In summary, I consider the major contribution to be the proposed dataset. However, this paper lacks in-depth analysis of the proposed dataset, such as experiments on current SOTA baselines, etc.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper compiles and releases a new multi-modal weather dataset incorporating both gridded ERA5 and off-grid MADIS weather stations. The dataset covers the Northeastern US from 2019\\u20132023 and includes a comprehensive list of weather variables.\\n\\n2. This paper proposes a multi-modal GNN to model local weather dynamics at the station level, taking advantage of both ERA5 and weather station observations.\", \"weaknesses\": \"1. What innovative aspects does this article's model possess? Is it solely the application of heterogeneous graph networks to weather forecasting?\\n\\n2. What is the distinction between the proposed dataset and existing datasets, which is a key contribution of this work? Given that the dataset is the primary innovation, the main text should include a more comprehensive introduction and analysis of its unique characteristics.\\n\\n3.
The paper lacks experiments (such as experiments on current GNN baselines), and the volume of experimental work falls short of the acceptance standards required by ICLR.\", \"questions\": \"Please refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a multimodal graph neural network, specifically MPNN-based, pretrained on the global ERA5 dataset and off-grid weather stations (MADIS) for downscaling. The target is to predict wind conditions at several local stations.\\nThe paper is well written.\\nHowever, the paper is missing many comparisons with SOTA baselines, especially GNN-based methods for climate and weather forecasts, and for downscaling. The comparison is only done with MLP. \\nConsidering the many gaps remaining, this paper is not yet ready for publication.\", \"additional_comments_on_reviewer_discussion\": \"There was no further discussion as the authors only replied to one reviewer.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The authors propose a forecast correction model which uses MPNN as a base model. They use ERA5 forecasts and weather station data as input to predict wind speed localised to the Northeastern US region.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors explore the localised forecasting problem here, which is indeed a crucial problem; further, they treat the problem as a multi-modality correction problem as compared to only using ERA5 for training a forecast model.\", \"weaknesses\": \"1. The authors have only used their methodology for the correction of wind forecasts; though they have given the reasons for doing this, weather forecasts collectively depend on a range of variables and it would be good to test their methodology on other weather variables as well.\\n2. 
In experiments, MPNN is mostly compared with MLP only (which is bound to improve results); some of the SOTA forecasting models are based on Vision Transformers and Diffusion models, which should be explored and added to the comparison here.\\n3. Reproducibility parameters are missing.\", \"questions\": \"1. What is the computational complexity of training MPNN on multi-modal data?\\n2. In the paper the authors have mentioned \\\"our approach improves predictions at longer lead times\\\". What does longer lead time mean here, as the results are shown for up to 48 hours only? How does the model perform on longer lead times, say 7 to 14 days?\\n3. Training details for the proposed approach as well as the baselines used in comparison are lacking. Also, the training runtimes and number of GPUs and devices are missing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"References regarding your review\", \"comment\": \"Thank you for your detailed feedback. We would appreciate some additional clarification to help us better address your concerns and improve our manuscript.\", \"regarding_your_first_point_about_post_calibration_methods\": [\"Could you please provide specific references to the post-calibration methods you mentioned that are widely used in the community? 
This would help us better understand the similarities and differences with our approach, and enable us to provide a more thorough comparison.\", \"We are particularly interested in papers that demonstrate the ERA5 prediction calibration method you described using off-grid observations.\"], \"concerning_your_second_point_about_missing_baselines\": [\"We would be grateful if you could point us to the specific SOTA baselines you believe should be included in our comparison for off-grid predictions.\", \"This would ensure we can conduct a comprehensive evaluation against the most relevant and current methods in the field.\"]}", "{\"summary\": \"The paper proposed a multi-modal GNN approach to fuse the global-level ERA5 weather data and station-level observation data for accurate off-grid predictions; the idea is interesting and the paper is written well to follow.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The problem is critical and the proposed idea is interesting, namely that correction instead of pure prediction may help for off-grid forecasting\\n2. The paper is easy to follow and the results are reasonable\", \"weaknesses\": \"1. While the intuition is reasonable, the proposed solution is confusing. It seems like a post-calibration method that is widely used in the community. For example, instead of doing the ERA5 prediction as proposed in the paper, the prediction results can be further calibrated with the off-grid observations; in this way, the accuracy will be greatly improved. In this case, what is the difference between the proposed approach and the above method? I also would like to see the results.\\n2. Missing baselines. There are many SOTA baselines for either grid-based predictions or off-grid predictions, but the authors only compare with some basic methods; more baselines are needed to validate the effectiveness of the proposed approach.\\n3. 
Since the method relies on the forecasting results of ERA5, the accuracy will be affected heavily by its values. More experiments and discussion should be added.\", \"questions\": \"See the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
CN2bmVVpOh
Transformer Mechanisms Mimic Frontostriatal Gating Operations When Trained on Human Working Memory Tasks
[ "Aneri Soni", "Aaron Traylor", "Jack Merullo", "Michael Frank", "Ellie Pavlick" ]
The Transformer neural network architecture has seen success on a wide variety of tasks that appear to require executive function - the ability to represent, coordinate, and manage multiple subtasks. In cognitive neuroscience, executive function is thought to rely on sophisticated frontostriatal mechanisms for selective gating, which enable role-addressable updating-- and later readout-- of information to and from distinct "addresses" of memory, in the form of clusters of neurons. However, Transformer models have no such mechanisms intentionally built-in. It is thus an open question how Transformers solve such tasks, and whether the mechanisms that emerge to help them to do so resemble the gating mechanisms in the human brain. In this work, we analyze the mechanisms that emerge within a vanilla attention-only Transformer when trained on a task from computational cognitive neuroscience explicitly designed to place demands on working memory gating. We find that the self-attention mechanism within the Transformer develops input and output gating mechanisms, particularly when task demands require them. These gating mechanisms mirror those incorporated into earlier biologically-inspired architectures and mimic those in human studies. When learned effectively, these gating strategies support enhanced generalization and increase the models' effective capacity to store and access multiple items in memory. Despite not having memory limits, we also find that storing and accessing multiple items requires an efficient gating policy, resembling the constraints found in frontostriatal models. These results suggest opportunities for future research on computational similarities between modern AI architectures and models of the human brain.
[ "transformers; neural networks; working memory; computational neuroscience; gating; computational cognitive science; mechanistic interpretability" ]
Reject
https://openreview.net/pdf?id=CN2bmVVpOh
https://openreview.net/forum?id=CN2bmVVpOh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yrRHSk4IqX", "viRP27HH05", "oDlEfwrN0O", "k7Qy1P6Rmr", "XMqIZYWG5c", "X0hjbxIhq1", "W2i4j8ODLl", "R64uxDSS4y", "Q77YRyCEN7", "Pgnjvn5Baw", "M1RwWCVU7B", "BgTYGo42io", "8Z96c5P38f", "4kTR5glZaR" ], "note_type": [ "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1737524128443, 1732732832953, 1730475475191, 1732937645405, 1732937631033, 1732945544753, 1732787654603, 1732732533955, 1732732788003, 1730316095798, 1730610823209, 1732732806090, 1734115332718, 1733092701634 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11516/Authors" ], [ "ICLR.cc/2025/Conference/Submission11516/Reviewer_CBqC" ], [ "ICLR.cc/2025/Conference/Submission11516/Area_Chair_4d7a" ], [ "ICLR.cc/2025/Conference/Submission11516/Area_Chair_4d7a" ], [ "ICLR.cc/2025/Conference/Submission11516/Reviewer_dX8K" ], [ "ICLR.cc/2025/Conference/Submission11516/Reviewer_CBqC" ], [ "ICLR.cc/2025/Conference/Submission11516/Authors" ], [ "ICLR.cc/2025/Conference/Submission11516/Authors" ], [ "ICLR.cc/2025/Conference/Submission11516/Reviewer_2znX" ], [ "ICLR.cc/2025/Conference/Submission11516/Reviewer_dX8K" ], [ "ICLR.cc/2025/Conference/Submission11516/Authors" ], [ "ICLR.cc/2025/Conference/Submission11516/Area_Chair_4d7a" ], [ "ICLR.cc/2025/Conference/Submission11516/Reviewer_2znX" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Review\", \"comment\": \"Thank you for the review.\\nThe novelty of our work does not rely on the fact that we perform path patching on a transformer, but that it\\u2019s possible to connect the mechanism a transformer uses to solve a task specifically targeting working memory gating to a mechanism in the brain. 
Previous work in mechanistic interpretability aims to explain how neural networks solve some task of interest, but has no motivation to design tasks that target this domain. We borrow methods from this literature, but are not claiming to innovate on the methods themselves. Rather, the contribution is that we apply these methods to answer important questions at the intersection of AI and neuroscience. That is, the connection to cognitive neuroscience is not just a nice analogy, it is the central research question in this work.\\nWe will make clarifications to the text to better explain the task. The key/query analogy to input and output gating is abstract but is proven using some of the corresponding weights - i.e. refer to page 5 (Section 4.1 KEY AND QUERY VECTORS SPECIALIZE FOR INPUT AND OUTPUT GATING). \\u201cThat is, key vectors representing an Ignore tuple receive very little attention (0.4% of layer 1 attention averaged over test set), whereas those representing a Store tuple receive the bulk of the attention (86.8%).\\u201d This suggests the key vector is similar to input gating due to its differential and systematic treatment of the tokens Store and Ignore. \\n\\u201cWe find that patching to the query vector in such cases indeed causes the attention to shift from the original stored tuple (74.1% of attention) to the stored tuple that matches the edited register, resulting in a corresponding change in the final same/different judgment.\\u201d This suggests the analogy between query and output gating due to the shift of attention to different registers, which relates to output gating/pulling out information from working memory. \\nMechanistic training on small scale tasks is slightly different from curriculum learning - this is exemplified with the generalization results comparing tasks of the same size, but with different underlying structure. 
One task allowed the model to learn a mechanistic solution and generalize - but the other task (same registers and symbols) did not. The point here was to train with a proper mechanistic task (and not just any easier task - in fact it is too easy, and we conclude that it is not helpful). We will discuss more specific details of curriculum learning and incorporate this into the text. \\nWe hope to add clarifications to the neuroscience component and reduce the load on the reader trying to understand the main takeaway. \\nThe path patching methods we use are previously described in the literature (Wang et al., 2022; Goldowsky-Dill et al., 2023). We have not made new discoveries or changes to the methodology. \\nThe task will be clarified in the text and figure.\"}", "{\"summary\": \"In their paper \\u201cTransformer mechanisms mimic frontostriatal gating operations when trained on human working memory tasks\\u201d the authors train simple transformer-based networks on a task used in cognitive neuroscience to study working memory. By using behavioural analysis in combination with some derivative tasks and the path-patching technique from MechInterp they show that transformers can learn to solve the task with gating operations. They draw conceptual conclusions to human neuroscience, where frontostriatal circuits are believed to implement gating operations as part of working memory as well.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I personally enjoyed the authors' investigations on identifying how task-dependent gate mechanisms develop through training and how they can be identified through behavioural investigations using thoughtfully constructed task variants. 
I do not necessarily find their findings surprising, but I acknowledge that having evidence for expected mechanisms is worthwhile scientific work and hence would not let that influence my rating.\", \"weaknesses\": \"Unfortunately, I do see quite major weaknesses in this paper in its current state. In the following I group my concerns into a block of neuroscience-related concerns and Mechanistic Interpretability concerns.\\n\\n*Neuroscience-related concerns*\\n\\nThe authors spend quite a bit of their manuscript making conceptual links to the brain\\u2019s gating mechanism in working memory but I struggle to see the relevance of their investigations to the brain\\u2019s working memory system, for the following reasons:\\n\\n- Using transformers for working memory investigations: The authors declare themselves that a weakness of their investigations is that transformers have access to the entire input sequence, unlike the brain. In fact, the very idea of working memory is that the immediate past needs to be continually compressed into a latent state of activations or rapid connectivity changes (e.g. see Stokes 2015 TICS). I wonder why the authors would not opt to use Mamba-like State Space Models which would seem to be much closer to brain-like processes while also allowing for input-dependent processing, and potentially slots through the distinct hidden states in Mamba. I see that models need to be abstractions and that one can in principle study working memory with transformers, but I am not aware of any prior investigations showing such links and hence the study here would need to provide such links themselves, which brings me to my next point:\\n- Comparison to the brain is purely conceptual: Given that the authors focus on the brain so much in both their title, abstract, and introduction I would have expected to see some actual data comparisons to cognitive neuroscience data, but the authors do not seem to provide any. 
The only data-like comparison they have is that the model struggles more to learn the task with more working memory items, which supposedly is similar to humans, but that simply seems like a general task difficulty effect and does not link to gating specifically. \\n- Poorly referenced neuroscience work: The authors heavily rely on the idea of \\u2018Stripes\\u2019 in frontal cortex for working memory for their model to be relevant, as it is about recalling content from distinct memory slots. Recent neuroscientific investigations call the idea of discrete slots implemented through distinct groups of cells into question and instead think of working memory as being implemented by dynamic and distributed population codes (Meyers 2018 JNeurophys; Miller 2013 Dialogues in ClinNeuro). In case the authors want to make a serious link to neuroscience I think they should discuss how their model could be reconciled with the dominant idea of population codes. Also, the 3-4 chunks given as working memory capacity is on the lower end of the scale though I acknowledge that there is some discussion around it. Of course, the classical number to use is 7 +/- 2 (as discussed in the reference the authors give).\\n\\n*MechInterp-related concerns*\\n\\nSo, the above points are to be taken seriously to make a believable link to neuroscience. Of course, the neuroscience-link is only conceptual, and the main work of the paper actually is in the MechInterp world showing how gating mechanisms develop in trained transformers. As said in the strengths section, that seems to be an interesting analysis though I am not really well qualified to judge how new that finding is. It seems intuitive to me that this should happen and, as the authors mention themselves, models like LSTM were actually constructed specifically with such ideas in mind. 
If the authors see their main contribution in studying how a gating mechanism develops in transformers, then I would suggest strongly deemphasizing the neuroscience narrative and instead contextualising the research more in the context of existing MechInterp findings. For example, work like [1] from earlier in the year looks at reasoning mechanisms which at least partially seem to rely on mechanisms similar to the ones proposed in this paper and I assume there is additional work which I am not aware of.\\n\\n[1] Brinkmann et al, 2024 https://arxiv.org/abs/2402.11917\", \"questions\": \"Major question:\\n- Do the authors see their key contribution in understanding transformers or making a strong point that transformers work like the brain? If it is the latter, I would expect a more detailed discussion of neuroscientific theories, at least a comparison of Transformers with models which are more typically considered working memory models with hidden states, and ideally some direct comparison with data from cognitive neuroscience. If it is the former, then I suggest the neuroscience link should be heavily deemphasized to not be misleading about the similarity to the brain. At the same time, I think in that case a reviewer who is well-versed in the MechInterp field should be included in the decision around acceptance. At the very least, the neuroscience content should largely be replaced by an actual overview of related MechInterp papers.\", \"minor_questions\": [\"Can you add more info on the actual setup of the data, for example how many timesteps does one trial have? I do not find that information.\", \"Is there any control to make sure the trials in the validation set are sufficiently different from the training set? Given you use a finite set of symbols with no noise, I wonder whether there are trials identical across datasets? The probability of this is hard to judge given that the information about the length of trials seems to be missing. 
I am sorry if it is there and I just cannot locate it.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThe authors have provided their responses. Could you please review them and share your feedback?\\n\\nThank you!\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThe authors have provided their responses. Could you please review them and share your feedback?\\n\\nThank you!\"}", "{\"comment\": \"Thank you for your thoughtful reply. I still believe that for a clear comparison between attentional head mechanisms and frontostriatal gating, a more detailed interpretability analysis is essential to fully support the claim. Such an analysis would provide a clearer understanding of how the first- and second-layer heads function and contribute to input and output gating. For instance, explicitly delineating the role of the first-layer head in \\\"input gating\\\" would enable a more precise comparison with biological mechanisms.\", \"regarding_the_potential_failure_mode\": \"The layer 2 head appears to prioritize attention to Store tuples over Ignore tuples (based on Store/Ignore information). Additionally, it seems to favor more recent Store tuples over more distant ones (informed by position embeddings). However, since Store/Ignore information and position embedding are combined nearly linearly within the layer 2 head (Elhage et al., 2021), it is possible for the position embedding to outweigh the Store/Ignore information. Based on this, my hypothesis is that the output logit difference between the correct and incorrect answers decreases as the number of consecutive Ignore tuples increases. 
This failure mode, if present, could significantly differentiate Transformers from RNNs, which might rely on attractor dynamics within hidden state representations to mitigate such a failure mode.\"}", "{\"comment\": \"Dear authors,\\n\\nthank you so much for taking the time to respond to my review. Can I quickly clarify whether the responses you provided above are reflected in changes to the manuscript's text? If yes, could you point me to where these were clarified in the text? Thank you for your efforts!\"}", "{\"title\": \"Response to Review\", \"comment\": \"Thank you for your review.\\n\\nSeparate head analysis would be interesting and better elucidate the gating that is occurring. In this paper, we aimed to show that attentional head mechanisms could look similar to fronto-striatal gating. We sought the simplest analysis that would support this claim, acknowledging that there is room for even more fine-grained circuit analysis. \\n\\nA diagram for the Transformers\\u2019 working memory was not provided because the Transformer has no inherent memory component. As noted by the reviewer, Transformers are inherently different from biological systems because of this reason. The main reason for using Transformers to test on working memory tasks was because they have \\u201cunlimited\\u201d working memory. Prior work (Braver et al. 2008, Soni and Frank 2024) in biologically based systems shows that working memory capacity limitations may emerge partly due to the learning problem: learning how to utilize working memory resources. We wanted to observe if this learning problem was also present in Transformers (despite having an \\u201cunlimited working memory\\u201d). 
We showed this using the difficulty the model has with the 3 register task.\\n\\n\\nOur path patching experiments extract embeddings from both heads of the first layer and send them to both heads of the second layer.\\n\\nMultiple Ignore tuples are possible, and when the model has correctly identified the meaning of store and ignore, this is not a concern since the context window of the transformer is unlimited. This becomes more of a problem in biological or biologically-inspired systems.\\n\\nPath patching works by localizing effects to either the QK or OV circuits, but is the reviewer referring to analysis of these circuits through the embedding matrices (W_u*QK*W_e) like what is performed in Elhage et al., 2021? If the reviewer could explain what analysis they are interested in seeing and what specifically they think this analysis would provide, we could better respond to the request and discuss feasibility and associated tradeoffs. Generally speaking, we agree that a better characterization of individual head behaviors would help communicate how exactly the gating is implemented, though this is a sufficiently open-ended direction to reasonably be left for future work.\"}", "{\"title\": \"Response to review part 1\", \"comment\": \"Our key contribution is to test whether learning difficulties and solutions that are present in humans and biological neural network models (specifically, credit assignment) can also be seen in Transformers. This is an interesting question because Transformers do not have architectural memory capacity limits, and thus if we see similarities, it suggests that these difficulties stem from more general learning principles. We establish that Transformers can learn gating-like mechanisms that can be leveraged for better generalization accuracy. 
Transformers are quite distinct from the brain, but we bridge the gap to utilize what we know about the brain to better train Transformers.\", \"the_length_of_each_trial_has_10_same_or_different_arbitrations\": \"this means initializations for each register and then 10 randomly sampled switches. We have 100,000 unique training examples that are used for each epoch. The validation dataset has unique trials separate from the training - this is validated at the time of data generation.\\n\\nThe reason to use Transformers despite them having access to the entire context (as opposed to mamba or other recurrent models) is deliberate: it isolates the need for WM management as opposed to maintenance. Indeed, there is growing evidence that what limits human WM capacity is not the demands on maintenance of the number of items that one can store but rather the management of WM, including binding an item to its role, which can be supported by input and output gating in biological models of frontostriatal circuits. Indeed even in these models which do have to maintain information over trials, effective WM capacity is limited by difficulties in this management problem (and learning thereof) and not the number of representations per se (Soni and Frank 2024). In this sense it is stronger to use the Transformer because it has no maintenance capacity limits at all but challenges these computational demands, to isolate the difficulties in management. Our results show that when pressured to do so by task distribution, the Transformer learns input and output gating policies which enable it to manage WM role addressability, and when it does so it can much more rapidly generalize to tasks with higher WM loads. 
\\n\\nWe don\\u2019t include comparisons to real brain data because (i) our focus here is on establishing the computational challenges in Transformers as motivated by those in frontostriatal networks, and (ii) it has also been established over many previous publications across species that the BG and thalamus are involved (and needed) for gating / controlling access to/from WM (Cools et al. 2007, McNab et al. 2008, Baier et al. 2010, Astle et al. 2014, Feldman et al. 2019, Wilhelm et al. 2023). So the contributions here focus on the inductive biases that the brain seems to have (input and output gating mechanisms, which interact with capacity limits) and how they relate to what Transformers do.\"}", "{\"summary\": \"This manuscript aims to understand the conditions by which interpretable gating mechanisms emerge in self-attention using attention-only transformers. The authors train simple, attention-only transformers on a simple reference-back-2 task. Using a path-patching approach to interpret transformer mechanisms, the authors characterize what the query and key vectors do/implement during the reference-back-2 task and related control tasks. Overall, they find that the query vector accesses information from specific tokens within the context window (output gating), whereas the key vector provides the address to store an item to gate information (input gating).\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Altogether, the results form an intuitive and helpful story to improve the understanding of the transformer\\u2019s self-attention mechanism. The combination of the simplicity of the task, and the path-patching approach on the attention-only transformer are instructive tools for understanding.\", \"weaknesses\": \"Much of the paper focuses on trying to understand the mechanisms of self-attention. 
Besides a general concern that I find the results somewhat unsurprising, I have three other weaknesses I\\u2019d like to highlight.\\n\\n1. Novelty. There are a number of papers in mechanistic interpretability that aim to study how transformer mechanisms (including self-attention) influence output/performance via path-patching, e.g., Meng et al., 2023a, 2023b; Olsson et al., 2022, Li et al., 2023 etc. However, most of the background in the paper references neuroscience theories (which is fine, but perhaps misleading as to existing work in mechanistic interpretability). A more thorough contextualization of mechanistic interpretability papers would be beneficial.\\n\\n2. Though the paper is geared towards understanding the self-attention mechanism through experimentation, I found the manuscript to be difficult to understand. For example, I found the presentation of the task (both text description and figure 1) to be quite confusing. Only when I reviewed the text and figure from another paper (Rac-Lubashevsky & Frank, 2021) did I understand the task. It shouldn\\u2019t be necessary to review a prior paper to understand the experiment. I also found figure 2 to be quite dense and confusing. (I have previously found the figure presentation in Meng et al., 2022, Fig 1 to be quite instructive.) Finally, many of the analyses (e.g., Queries gate outputs; keys gate inputs) are only reported in text. Would it be possible to include a visual figure/understanding that is intuitive as to what the query and key vectors are actually computing? \\n\\n3. The pretraining result is generally unsurprising (fig 5), and is consistent with prior results in curriculum learning, which is not mentioned. \\nThis is perhaps more of a comment than an explicit suggestion, and something for the authors to consider: In some ways, I think contextualizing the work using the background of \\u2018frontostriatal gating\\u2019 mechanisms may be confusing to some readers. 
I understand that trying to demonstrate this biological/neural mechanism in transformers may have been the motivation of the authors, yet to a reader who is unfamiliar with neuroscience, this may not add much. The explanation of gating mechanisms via path-patching should in theory be computationally sufficient, and doesn\\u2019t require reference to neuroscience (which can often be obscure).\\n\\nMeng, Kevin, David Bau, Alex Andonian, and Yonatan Belinkov. \\u201cLocating and Editing Factual Associations in GPT.\\u201d arXiv, January 13, 2023. http://arxiv.org/abs/2202.05262.\\n\\nMeng, Kevin, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. \\u201cMass-Editing Memory in a Transformer.\\u201d arXiv, August 1, 2023. http://arxiv.org/abs/2210.07229.\\n\\nOlsson, Catherine, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, et al. \\u201cIn-Context Learning and Induction Heads.\\u201d arXiv, September 23, 2022. https://doi.org/10.48550/arXiv.2209.11895.\\n\\nLi, Kenneth, Aspen K. Hopkins, David Bau, Fernanda Vi\\u00e9gas, Hanspeter Pfister, and Martin Wattenberg. \\u201cEmergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task.\\u201d arXiv, February 27, 2023. https://doi.org/10.48550/arXiv.2210.13382.\", \"questions\": \"My questions revolve around addressing the 3 primary weaknesses I mention above.\\n\\n1. Besides neuroscience, how does this current work relate to past and ongoing work in mechanistic interpretability? E.g., Meng et al., 2023a, 2023b; Olsson et al., 2022; Li et al., 2023. More specifically, how does the path patching approach used here compare to methods and insights in those papers?\\n\\n2. Is it possible to include additional figures/visualizations to improve understanding of the paper? More specifically, is it possible to create a clearer visual explanation of the reference-back-2 task or a diagram showing how the query and key vectors function as gating mechanisms? 
Since the paper aims to interpret mechanisms of transformers, the contribution of the paper hinges on its clarity and presentation.\\n\\n3. What is the relation of the final result (Fig 5) to curriculum learning?\", \"minor_question\": \"What do the authors hypothesize would happen for multiheaded transformers? What does each attention head do?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores how Transformers, when trained on human working memory tasks, develop mechanisms that relate to frontostriatal gating operations observed in human cognition. The study focuses on understanding whether Transformers can solve working memory tasks using self-attention to mimic input and output gating, which are key to human working memory function. By training a small, attention-only Transformer model on tasks requiring selective memory gating, the authors find that certain task conditions lead to the emergence of gating-like behaviors in the model. The results suggest that these emergent mechanisms enhance the model\\u2019s generalization and task performance, potentially bridging cognitive neuroscience and artificial intelligence.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Originality: The study\\u2019s focus on emergent mechanisms in Transformer models trained on working memory tasks is novel. The observation that Transformers struggle with the three-register task is surprising. This research adds a unique perspective to both AI and cognitive science by applying Transformers to a working memory framework typically explored in neuroscience.\", \"quality\": \"The study employs a simplified, attention-only Transformer, enhancing interpretability and enabling the identification of specific mechanisms within the model. 
The identification of Transformer mechanisms sheds light on how these models might solve working memory tasks and how specific task demands can trigger such emergent functionality. Control tasks are designed to isolate the conditions necessary for mechanistic emergence, strengthening the quality of the findings.\", \"clarity\": \"The analysis and results are clearly presented, making it accessible to a broad readership.\", \"significance\": \"Understanding how Transformers solve tasks requiring working memory could substantially impact Transformer interpretability and development. By drawing parallels between Transformer mechanisms and human frontostriatal gating, this paper contributes valuable insights for interdisciplinary research. The study\\u2019s findings bridge AI with cognitive neuroscience, offering a model that informs both fields and fosters further exploration into the intersection of biological and artificial intelligent systems.\", \"weaknesses\": \"\\u2022\\tWhile the use of a small Transformer model with few heads allows for easier interpretability, the study does not fully analyze or demonstrate the distinct functions of each attention head in each layer. For instance, why exactly are two heads required per layer, and what distinct functions does each head serve? 
More detailed analyses of each head\\u2019s role (e.g., see below) would enhance the interpretability of the model\\u2019s mechanisms, and thus the impact of the work.\\n\\n\\u2022\\tA comparative diagram for understanding \\u201cworking memory\\u201d mechanisms between Transformers and RNNs\\u2014often used in biological working memory research\\u2014would provide useful context for readers.\\n\\n\\u2022\\tIn Figure 2, while path patching examples are shown, it is implied rather than explicitly stated which layer and head are depicted, making it somewhat unclear which specific head is involved.\\n\\n\\u2022\\tFor brevity, I use x^0_i, x^1_i, x^2_i to specify the residual stream at tuple i before the first layer's output, after the first layer's output, and after the second layer's output, respectively. The layer 1 head appears to bind the components (Ins_i, Reg_i, Sym_i) into a tuple and write it to the residual stream x^1_i of Sym_i. However, it is not fully explained if the layer 1 head writes the Store or Ignore Ins_i to different directions within the residual stream. If so, it may function as part of the \\\"input gating\\\" mechanism (suggesting that the Store direction in layer 1 head\\u2019s output, but not Ignore direction, is similar to a \\\"working memory\\\" space). Further, do the two heads function similarly or differently? \\n\\n\\u2022\\tThe functions of the layer 2 head, especially how it compares Reg_i and Reg_j (based on Ins_j=Store), are not fully elucidated. A detailed explanation of how the layer 2 head compares the current Sym_i and a prior Sym_j to output same/different answers would clarify its role in \\u201coutput gating\\u201d. Do the two heads function similarly or differently? 
The study could use mechanistic interpretability tools, such as analyzing QK and OV circuits, or examining the geometric alignment of vectors in key, query, and value subspaces to address these gaps.\\n\\n\\u2022\\tWhile transformers can be trained on working memory tasks, they inherently differ from biological systems as they have access to all previous residual stream positions. The most suitable counterpart of \\u201cworking memory\\u201d might be x^1_i, which layer 1 head writes into and layer 2 head reads from. However, x^1_i is different for each position i, different from a true hidden-state-like \\\"working memory\\\". This divergence raises significant concerns about labeling the mechanisms as \\\"input gating\\\" and \\\"output gating\\\" or referring to Transformer mechanisms as \\\"working memory\\\". Using different terms across the manuscript, or discussing the limitations of these analogies would make the paper\\u2019s framing more precise.\\n\\n\\u2022\\tThe introduction lacks a review of previous work on comparing Transformers to brain/neuroscience (to name a few, https://arxiv.org/abs/2112.04035, https://arxiv.org/abs/2405.14992), which could more accurately depict the gap and contextualize the paper\\u2019s contributions.\", \"questions\": \"1.\\tGiven that the layer 2 head primarily computes functions linearly (Elhage et al 2021), have the authors considered potential failure modes? For example, layer 2 head attends more to the Store tuples than Ignore tuples (based on layer 1 head's output?), and attends more to the most recent Store tuples than more distant Store tuples (based on position embedding?). Consider the scenario of a Store tuple followed by many Ignore tuples\\u2014could a recent Ignore tuple receive higher attention than distant Store tuples if position embedding outweighs the Store feature (due to linearity in key-query operation)? 
Such a failure mode would be significant, as RNNs, unlike Transformers, are less susceptible to this (assumed learned attractor dynamics within the hidden state representation).\\n\\n2.\\tCould the authors consider using mechanistic interpretability tools, such as QK and OV circuit analysis, or representational geometry mapping of key, query, and value spaces? These analyses help further clarify how the mechanisms are implemented within each head and layer.\\n\\n3. For other questions please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to review part 2\", \"comment\": \"We indeed should have situated our work in relation to other neuroscience connections to Transformers. For example, Whitington et al showed how transformers given spatial position encodings relate to models of hippocampus and place/grid cells. Ji-An et al. examine induction heads in transformers and their relation to models of episodic memory for contextual retrieval, and how these can account for position-dependence of memory recall. While each of these makes its own contribution linking transformers to episodic memory phenomena, they have not addressed this challenge in reading and writing to distinct addresses in working memory in a fashion that allows them to be retrieved when needed, as motivated by the human reference-back task.\\n\\nRegarding neural data in WM. The idea of dynamic and distributed population codes does not challenge the idea of stripes/slots (or similar). Modern cognitive science models reconcile the differences between slots and resources models, positing hybrid models wherein each slot-like entity can represent multiple items with limited resources (e.g., Nassar et al, 2018), and indeed this has been recently simulated within stripes (Soni & Frank 2024). 
Early versions of frontostriatal networks suggested computational utility of distributed codes within stripes (but used simpler localist representations for visualization in simpler tasks). Other models in the same family also show how \\u201cstripes\\u201d need not be anatomically pre-defined but can emerge as clusters within an RNN, but still benefit from gating from BG and thalamus to trigger transitions and support generalization (Calderon et al, 2022). Computationally, a key function of these models is that a prefrontal population acts as a pointer/role to support variable binding of its content, which also facilitates generalization (O\\u2019Reilly & Frank 2006; Kriete et al, 2013; Collins & Frank, 2013) \\u2013 but whether this is implemented in anatomical pre-defined populations or sub-spaces (and whether the representations therein are fixed point or dynamic attractors) is impertinent to the abstraction. Indeed, Lundqvist et al 2023 recently reviewed data from Miller\\u2019s lab in nonhuman primates, and proposed how prefrontal populations perform \\u201cspatial computing\\u201d, representing the abstract roles of items separate from their content. Moreover they linked beta dynamics to control processes that modulate access to working memory, and gamma band dynamics to the information content itself. Note that gating decisions by basal ganglia and thalamus themselves will trigger transient dynamics in the cortex which are often studied in the beta band. 
The authors found support for this spatial computing notion across multiple datasets, which is largely consistent with that predicted by frontostriatal gating.\"}", "{\"metareview\": \"This work shows that Transformers, when trained on tasks requiring working memory gating, develop input and output gating mechanisms that resemble those in human frontostriatal systems.\\n\\nThe paper is generally well-written and aims to bridge the gap between gating mechanisms in AI Transformers and frontostriatal systems in biological brains. The approach provides valuable insights into the interpretability of the self-attention mechanism in Transformers.\\n\\nHowever, all three reviewers rated the paper below the acceptance threshold due to several unresolved issues, even after the rebuttal:\\n\\nLack of comparisons with other AI architectures, such as Mamba and RNNs, and insufficient justification for selecting Transformers as the focus.\\nInsufficient references and comparisons to existing works linking Transformers to neuroscience.\\nAbsence of behavioral or neuronal-level comparisons with neuroscience data from biological brains.\\n\\nThe authors are encouraged to address these concerns and include the suggestions in future submissions.\", \"additional_comments_on_reviewer_discussion\": \"All three reviewers rated the paper below the acceptance threshold due to several unresolved issues, even after the rebuttal:\\n\\nLack of comparisons with other AI architectures, such as Mamba and RNNs, and insufficient justification for selecting Transformers as the focus.\\nInsufficient references and comparisons to existing works linking Transformers to neuroscience.\\nAbsence of behavioral or neuronal-level comparisons with neuroscience data from biological brains.\"}", "{\"title\": \"Reply\", \"comment\": \"Dear authors,\\n\\nThanks for your reply to my review. In my initial review, I mentioned 3 primary weaknesses with the paper: \\n\\n1. 
Novelty in relation to prior work in mechanistic interpretability\\n2. Clarity of the paper, particularly the description of the task and how it addresses the main problem/domain in this study\\n3. The relevance to current findings in relation to curriculum learning in figure 5. \\n\\nThough I am sympathetic to the arguments made by the authors, i.e., that 1) the work here addresses central questions in cog neuro, and 2) the work in mechanistic interpretability does not address the domain of topic in this paper (e.g., working memory and gating), it was not immediately clear to me that they have addressed this concern in the manuscript (since no revisions were attached), and if they were, how they might address it.\", \"regarding_clarity_of_the_task\": \"The authors mention that they will improve and revise the clarity, but do not specifically mention how.\", \"regarding_the_relation_to_curriculum_learning_in_figure_5\": \"I am open to the authors argument, i.e., \\\"Mechanistic training on small scale tasks is slightly different from curriculum learning\\\". But I was a bit confused by their explanation -- curriculum learning, as I understand it, encompasses a broad suite of techniques aimed to embed useful representations into models through tasks (not exclusively simpler tasks).\\n\\nIn sum, because it was not clear how the authors were planning to more specifically address the initial concerns raised beyond mentioning they will revise the manuscript, I will keep my score.\"}" ] }
CMqOfvD3tO
Class Distribution-induced Attention Map for Open-vocabulary Semantic Segmentations
[ "Dong Un Kang", "Hayeon Kim", "Se Young Chun" ]
Open-vocabulary semantic segmentation is a challenging task that assigns seen or unseen class labels to individual pixels. While recent works with vision-language models (VLMs) have shown promising results in zero-shot semantic segmentation, they still struggle to accurately localize class-related objects. In this work, we argue that CLIP-based prior works yield patch-wise noisy class predictions while having highly correlated class distributions for each object. Then, we propose Class Distribution-induced Attention Map, dubbed CDAM, that is generated by the Jensen-Shannon divergence between class distributions of two patches that belong to the same (class) object. This CDAM can be used for open-vocabulary semantic segmentation by integrating it into the final layer of CLIP to enhance the capability to accurately localize desired classes. Our class distribution-induced attention scheme can easily work with multi-scale image patches as well as augmented text prompts for further enhancing attention maps. By exploiting class distribution, we also propose robust entropy-based background thresholding for the inference of semantic segmentation. Interestingly, the core idea of our proposed method does not conflict with other prior arts in zero-shot semantic segmentation, thus can be synergetically used together, yielding substantial improvements in performance across popular semantic segmentation benchmarks.
[ "Vision Language Model", "Dense Localization" ]
Accept (Poster)
https://openreview.net/pdf?id=CMqOfvD3tO
https://openreview.net/forum?id=CMqOfvD3tO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zYvZyT45HZ", "zXShmvgHZ5", "yC94LKQrlZ", "wBbrsRf3o7", "vQqa0wdvHQ", "s4eTiu99xf", "pkVUNHjT7M", "mbNbjvYtYJ", "lI6k1grBbI", "alsGS9uj1h", "a3rabsFK7k", "ZcRKbug9JF", "ZHYxOS9TeL", "XhZKoJpXX9", "OmQ1W78eaZ", "OguEyyLS4w", "OdC37yqHA1", "NBCn9KAy2d", "MzcYVXEdXQ", "MRAORDoZz5", "G7U2nyfi0l", "F9xiQWsafT", "F16Ltp9gNJ", "ES2teyJcZZ", "DSebLZi4zJ", "8b9f5965xh", "4Vnk59Fiz1", "2dN45OGN0G", "2R6pZFamyz" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730427956162, 1732266578798, 1732439926225, 1730815437237, 1732715147306, 1732266696858, 1732265444194, 1734996627746, 1732266308526, 1732423722412, 1732714935469, 1732264695921, 1732265553755, 1732596429933, 1732266170516, 1733181173274, 1732625642160, 1737523838267, 1732264813034, 1730524889019, 1730484249545, 1732264005358, 1729504848970, 1732266418822, 1732715848075, 1732761009452, 1732714129544, 1732464148551, 1732264430254 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7429/Reviewer_6UEY" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Reviewer_6UEY" ], [ "ICLR.cc/2025/Conference/Submission7429/Reviewer_cFei" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Area_Chair_bW5x" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7429/Reviewer_sGKo" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Reviewer_ehJ5" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Reviewer_874d" ], [ "ICLR.cc/2025/Conference/Submission7429/Reviewer_ehJ5" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Reviewer_sGKo" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Reviewer_cFei" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ], [ "ICLR.cc/2025/Conference/Submission7429/Reviewer_874d" ], [ "ICLR.cc/2025/Conference/Submission7429/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This manuscript is dealing with Open-vocabulary semantic segmentation aiming to improve the labeling of individual pixels with both seen and unseen classes, with a focus on localization and background separation. This work leverages class distribution comparisons between patches of the same object to improve localization. By integrating their approach (CDAM) into CLIP\\u2019s final layer, the model\\u2019s ability to focus on desired classes is enhanced. CDAM also supports multi-scale image patches and augmented text prompts, improving segmentation accuracy and enabling entropy-based background thresholding. 
The presented results show some performance improvements on standard benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The idea of the manuscript is rather simple and straightforward. Nevertheless, there are some things to be highlighted:\\n1. By using class distribution similarities the approach aims at enhancing the localization of objects in open-vocabulary segmentation, addressing up to a point the problem of other methods when coping with the patch-wise noise in class predictions. A more reliable attention map is statistically obtained by looking at the Jensen-Shannon divergence between patch pairs belonging to the same class.\\n2. Although not new, it is good that the proposed approach is able to deal with multi-scale patches and augmented prompts, making it a more versatile framework able to more precisely capture class distinctions. Along the same lines (not really new) is the entropy-based background thresholding, enhancing the segmentation performance by providing a cleaner separation of relevant classes from the background.\\n3. Probably the most important contribution is the compatibility with other zero-shot segmentation approaches, allowing the approach to be integrated with existing models, potentially compounding improvements without much redundancy. This adaptability offers an approach that complements, rather than competes with, prior work, making it suitable for further extensions.\", \"weaknesses\": \"There are several issues that need to be clarified.\\n1. Increased Computational Complexity. There is little discussion regarding the complexity overhead introduced, especially with the Jensen-Shannon divergence calculation and multi-scale patch analysis that might require significant computational resources, which could limit its scalability in real-time or resource-constrained applications.\\n2. 
There is an inherent dependency on the CLIP model, inheriting CLIP\u2019s limitations in terms of class diversity, image-text representation, and the specificity of semantic segmentation. Any inherent biases or limitations in the CLIP model could be amplified or remain unresolved in this framework.\\n3. The background thresholding approach is rather heuristic (also indicated by the authors) and this brings uncertainty regarding its robustness, especially in the presence of highly complex scenes with ambiguous background features. Accurately setting thresholds for diverse and dynamic backgrounds could be challenging and might require extensive tuning for different environments.\\n4. It is unclear why most of the detailed analysis has been done on top of MaskCLIP and not on some of the newer approaches. I understand that the improvement is larger when compared to MaskCLIP, but one would have expected to see the analysis on the better-performing approaches. Some of the details are missing or the authors treat them superficially. For example, when doing the analysis of the results in Table 1 they simply indicate that the best-performing approach, i.e., CaR, requires high computational costs, but it is unclear why this is indeed a problem given that CDAM is supposed to be added on top of it. \\n5. There is no insight into generalization on rare or fine-grained classes. The approach emphasizes improvement in localization but may not specifically address challenges in recognizing rare or highly similar fine-grained classes, a common difficulty in open-vocabulary segmentation.\\n6. Although CDAM is designed to work alongside existing zero-shot methods, effectively combining this technique with other methods might be challenging in practice. 
There is practically no discussion highlighting, for example, whether this compatibility would likely require additional tuning and could complicate model training, implementation, and maintenance.\", \"questions\": \"The questions below summarize the weaknesses I've highlighted above.\\n1. How does the complexity introduced by the Jensen-Shannon divergence calculation and multi-scale patch analysis impact the model\\u2019s runtime and feasibility in real-time applications?\\n2. To what extent do CLIP\\u2019s limitations in class diversity and image-text representation influence the segmentation results of CDAM?\\n3. What specific challenges could arise when applying the heuristic entropy-based background thresholding in complex scenes with ambiguous background features? Are there data-driven approaches that could replace the heuristic thresholding method to improve robustness and adaptability?\\n4. Why did the authors focus their analysis primarily on MaskCLIP, and what advantages or disadvantages does this bring in evaluating CDAM\\u2019s effectiveness? How would CDAM\\u2019s performance and computational requirements compare if implemented on more recent segmentation models like CaR or others?\\n5. How might the CDAM approach be adapted to address challenges in recognizing rare or fine-grained classes that are crucial in open-vocabulary segmentation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate your insightful feedback and comments.\\n\\n> **[W1] There is little explanation why the Jensen-Shannon divergence is suitable for semantic segmentation.
The rationality of this method should be explained clearly.**\\n\\nThe Jensen-Shannon (JS) divergence is particularly suitable for constructing our CDAM due to its symmetry and bounded range, which enhance its robustness compared to other metrics, such as KL divergence and Wasserstein distance, in generating attention maps. Specifically, KL divergence is asymmetric and has an unbounded maximum value, while Wasserstein distance, although symmetric, also lacks a bounded range. Empirically, we have demonstrated that the JS divergence is more effective for open-vocabulary semantic segmentation compared to these alternatives (see our response to [W3] below with a table for further details).\\n\\n> **[W3] The JS divergence is used in the paper to obtain Attn_CDAM. How is the effect of using KL divergence and other measurement methods.**\\n\\nWe have conducted an ablation study using several baseline methods with diverse metrics such as Kullback\\u2013Leibler (KL) divergence and Wasserstein distance (WS). Table 1 below demonstrates that JS divergence consistently achieves the best averaged performance across three benchmarks. We chose JS divergence due to its symmetry, and Table 1 supports this choice sufficiently. We have updated these results in the revised paper (Line 465-468 and Line 890-893).\\n\\n**Table 1:** Ablation study of similarity metrics for measuring the distance of class distributions over patches. WS refers to the Wasserstein distance.
The evaluation is based on mIoU(\\\\%).\\n\\n|Method|Metric|VOC21|Context60|COCO-Obj|Avg.|\\n|-|-|-|-|-|-|\\n|MaskCLIP+CDAM|KL|55.7|30.4|34.3|40.1|\\n||JS|55.9|30.5|34.3|**40.2**|\\n||WS|53.0|26.7|28.5|36.1|\\n|SCLIP+CDAM|KL|58.8|30.4|34.6|**41.3**|\\n||JS|59.0|30.4|34.5|**41.3**|\\n||WS|57.2|29.0|31.4|39.2|\\n|ClearCLIP+CDAM|KL|57.4|29.5|34.3|40.4|\\n||JS|57.6|29.8|34.5|**40.6**|\\n||WS|56.9|28.6|33.4|39.6|\\n|GEM+CDAM|KL|58.9|30.5|35.1|**41.5**|\\n||JS|58.7|30.6|35.2|**41.5**|\\n||WS|58.4|29.5|34.0|40.6|\\n\\n> **[W2] There are some confusing aspects in the use of symbols in this paper, such as the final similarity map S in line 251 and the class distribution S in line 266; The image representation is unclear, for example, the meaning of S_p1 in Figure 1 has not been mentioned yet in the paper.**\\n\\nIn the revision, we have unified the terminology throughout the paper, consistently referring to it as \\\"the similarity map $S$.\\\" Additionally, $S_{P1}$ represents the class distribution at the position of patch 1 ($P1$) within the similarity map $S$. We have clarified this in the figure caption to ensure better understanding.\"}", "{\"comment\": \"I thank the authors for their comprehensive responses. While it would have been better to have these explanations in the submitted version I understand that this is not always possible. Nevertheless, adding these details definitely makes the contribution more clear. Based on the responses (also to the other reviewers) I decided to raise my score.\"}", "{\"summary\": \"The noisy patch-level prediction is rectified by class distribution-induced attention in zero-shot open-vocabulary semantic segmentation in this paper. The motivation is clear. Experimental results show the effectiveness of the proposed idea cooperating with several state-of-the-art approaches and significant performance improvement. 
The paper is well-written and the figures are easy to follow.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper is good enough for me, given the clear motivation and the consistent performance gain on several public semantic segmentation datasets when integrating the proposed Class Distribution-induced Attention Map into different SOTA methods.\\n2. The motivation is clear and illustrated well in Fig. 1.\", \"weaknesses\": \"1. In Table 1, even though CaR is a heavily computational method and CLIP-DIY uses an extra background extractor, the proposed CDAM is not integrated into CaR and CLIP-DIY, and the best performance is not achieved on VOC21.\", \"questions\": \"1. It's better to give the mIoU both for seen and unseen classes.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are very pleased that most of your concerns have been addressed and sincerely appreciate the increase in the score. The reviewer's insights and suggestions have been invaluable in improving the clarity and quality of our work. Thank you once again for your valuable feedback.\"}", "{\"comment\": \"> **[W4] What is the inference time after adding CDAM to other methods.**\\n\\nWe have conducted two experiments (1 NVIDIA A100 GPU used): (1) a comparison of inference time with several baselines (Table 2) and (2) an analysis of the inference time for each component of our CDAM (Table 3). Considering how much CDAM can improve performance (to the level of CaR!), the additional computation time can be justified and one can still obtain the result for one image within at most 60 msec (0.06 sec), which may still be suitable for many real-time applications. Note that Table 3 shows that the majority of the computational cost in CDAM arises from its multi-scale image patches ($\\text{Attn}\\_\\text{MS}$) and augmented text prompts (ATP).
The computational overhead increases with the number of scales ($m$) for $\\\\text{Attn}_\\\\text{MS}$ and the number of classes for ATP, primarily due to the increased computational burden of computing Jensen-Shannon (JS) divergence. These results have been included in the revised paper (Line 460-464 and Line 782-804).\\n\\n**Table 2:** Inference time comparison (in seconds per image) of baseline methods with our CDAM.\\n\\n|Method|VOC21|Context60|COCO-Obj|\\n|-|-|-|-|\\n|CaR|3.497 sec|9.340 sec|12.270 sec|\\n|CLIP-DIY|0.520 sec|-|0.559 sec|\\n|MaskCLIP|0.017 sec|0.017 sec|0.017 sec|\\n|MaskCLIP+**CDAM**|0.043 sec (+0.026)|0.049 sec (+0.032)|0.051 sec (+0.034)|\\n|SCLIP|0.018 sec|0.018 sec|0.018 sec|\\n|SCLIP+**CDAM**|0.044 sec (+0.026)|0.050 sec (+0.032)|0.052 sec (+0.034)|\\n|ClearCLIP|0.017 sec|0.018 sec|0.018 sec|\\n|ClearCLIP+**CDAM**|0.044 sec (+0.027)|0.050 sec (+0.032)|0.051 sec (+0.033)|\\n|GEM|0.026 sec|0.026 sec|0.026 sec|\\n|GEM+**CDAM**|0.052 sec (+0.026)|0.059 sec (+0.033)|0.060 sec (+0.034)|\\n\\n\\n**Table 3:** Inference time (in seconds per image) for each component of CDAM. The baseline model is set as MaskCLIP, as the additional overhead introduced by CDAM is consistent across other baseline methods. Ba denotes Baseline, At-C denotes $\\\\text{Attn} _\\\\text{CDAM}$, At-M denotes $\\\\text{Attn} _\\\\text{MS}$, ATP refers to the augmented text prompts and Th-e denotes $\\\\text{Thr} _\\\\text{ent-bg}$.\\n\\n|Ba|At-C|At-M|ATP|Th-e|VOC21|Context60|COCO-Obj\\n|-|-|-|-|-|-|-|-|\\n|\\u2714| | | | |0.017 sec|0.017 sec|0.017 sec|\\n|\\u2714|\\u2714| |||0.020 sec|0.022 sec|0.024 sec|\\n|\\u2714|\\u2714|\\u2714|||0.032 sec|0.038 sec|0.040 sec|\\n|\\u2714|\\u2714|\\u2714|\\u2714||0.043 sec|0.049 sec|0.051 sec|\\n|\\u2714|\\u2714|\\u2714|\\u2714|\\u2714|0.043 sec|0.049 sec|0.051 sec|\\n\\n\\n> **[W5] The word 'food' appears twice in the Supercategory in Tab 4. 
An additional \u2018()\u2019 appeared in line 330**\\n\\nWe have resolved the issues you mentioned.\\n\\n> **[Q1] How to explain the rationality of using two hyper-parameters \u03b1 and Thr_default to control the background thresholding in Formula 4?**\\n\\nAs mentioned in the first paragraph of Section 3.3, in an ideal case where the segmentation model performs well, the background class can be differentiated using a default threshold value ($\\text{Thr} _\\text{default}$) of 0.5. For this reason, $\\text{Thr} _\\text{default}$ was set to 0.5 in FreeDA (Barsellotti et al., CVPR 2024), and we have also fixed this value at 0.5 in our approach. However, to account for the potential inaccuracies of the segmentation model, Formula 4 introduces the use of $\\alpha$ / $\\text{H}(\\text{S}) _\\text{center}$ to adaptively control $\\text{Thr} _\\text{default}$. Specifically, $\\alpha$ serves as a scaling constant for the value of $\\text{H}(\\text{S}) _\\text{center}$.\"}", "{\"comment\": \"Thank you for sharing your thoughtful comments and feedback.\\n\\n> **[W1] The authors claim that CLIP-based prior works yield patch-wise noisy class predictions while having highly correlated class distributions for each object, but this lacks necessary validation. Although an analysis of CLIP and MaskCLIP is provided in the methods section to support this claim, the analysis is not sufficiently general. MaskCLIP is an earlier work, and more recent research in the field may have addressed patch-wise noisy class predictions.
Thus, the argument may not be robust, and the paper's novelty compared to related works remains debatable.**\\n\\nTable 1 provides statistical evidence to support our claim over diverse recent baselines, reporting the results of the following experiment: For a given image, one patch $P_{target}$ was randomly selected and then two patches $P_{in}$ and $P_{out}$ were randomly selected from the target class region and the rest of the region, respectively. Then, we measure (1) the probability of {class prediction in $P_{target}$ is correct} and (2) the probability of {distribution similarity between $P_{target}$ and $P_{in}$ < distribution similarity between $P_{target}$ and $P_{out}$}. These results clearly support that our claim is still valid even for more recent CLIP-based prior works such as SCLIP, ClearCLIP and GEM.\\n\\nIn addition, we have conducted an ablation study to validate the effectiveness of our proposed CDAM components over various baselines including recent works such as SCLIP, ClearCLIP, and GEM. Table 2 shows that the addition of our CDAM components consistently improved open-vocabulary semantic segmentation performance without requiring additional training. \\n\\nThese new studies strengthen our claim and demonstrate the robustness and novelty of our approach over diverse baselines. The corresponding updates have been incorporated into the revised paper (Line 211-221 and Section 4.3)\\n\\n**Table 1:** Accuracy comparison of class predictions and similarity of class distributions with several CLIP-based training-free methods across datasets. 
"Similarity of class distribution is measured using JS divergence.\\n\\n|Baseline||VOC21|Context60|COCO-Obj|Avg.|\\n|-|-|-|-|-|-|\\n|MaskCLIP|Class Prediction|56.1 $\\pm$ 1.17|38.4 $\\pm$ 0.23|27.4 $\\pm$ 0.46|43.0|\\n||Sim of Class Dist|70.9 $\\pm$ 0.44|73.1 $\\pm$ 0.34|69.8 $\\pm$ 0.42|71.0|\\n|SCLIP|Class Prediction|67.0 $\\pm$ 0.49|41.8 $\\pm$ 0.31|33.6 $\\pm$ 0.23|47.4|\\n||Sim of Class Dist|78.9 $\\pm$ 0.26|72.0 $\\pm$ 0.30|75.4 $\\pm$ 0.55|75.5|\\n|ClearCLIP|Class Prediction|70.3 $\\pm$ 0.51|42.7 $\\pm$ 0.23|36.4 $\\pm$ 0.19|49.9|\\n||Sim of Class Dist|76.0 $\\pm$ 0.58|70.5 $\\pm$ 0.58|71.7 $\\pm$ 0.48|72.3|\\n|GEM|Class Prediction|70.8 $\\pm$ 1.12|42.4 $\\pm$ 0.38|37.5 $\\pm$ 0.45|50.0|\\n||Sim of Class Dist|79.4 $\\pm$ 0.89|71.2 $\\pm$ 0.34|74.2 $\\pm$ 0.20|74.8|\\n\\n\\n**Table 2:** Ablation study on components of our CDAM with several baseline methods. We measured the performance on VOC21. Ba denotes Baseline, At-C denotes $\\text{Attn} _\\text{CDAM}$, At-M denotes $\\text{Attn} _\\text{MS}$, ATP refers to the augmented text prompts and Th-e denotes $\\text{Thr} _\\text{ent-bg}$. The evaluation is based on mIoU(\\%).\\n\\n|Ba|At-C|At-M|ATP|Th-e|MaskCLIP|SCLIP|ClearCLIP|GEM|\\n|-|-|-|-|-|-|-|-|-|\\n|\\u2714| | | | |33.1|50.5|50.7|52.1|\\n|\\u2714|\\u2714| |||50.1|55.0|52.1|54.7|\\n|\\u2714|\\u2714|\\u2714|||53.7|56.9|55.8|56.5|\\n|\\u2714|\\u2714|\\u2714|\\u2714||54.7|57.2|56.0|56.9|\\n|\\u2714|\\u2714|\\u2714|\\u2714|\\u2714|**55.9**|**59.0**|**57.6**|**58.7**|\\n\\n> **[W2] Jensen-Shannon divergence is a key technique used in this paper. However, there is insufficient discussion on the necessity of using this method and why alternative techniques would be inadequate.**\\n\\nWe have conducted an ablation study using several baseline methods with diverse metrics such as Kullback\\u2013Leibler (KL) divergence and Wasserstein distance (WS). 
Table 3 demonstrates that JS divergence consistently achieves the best averaged performance across three benchmarks. We chose JS divergence due to its symmetry, and Table 3 supports our choice of JS divergence sufficiently. We have updated these results in the revised paper (Line 465-468 and Line 890-893).\\n\\n**Table 3:** Ablation study of similarity metrics for measuring the distance of class distributions over patches. WS refers to the Wasserstein distance. The evaluation is based on mIoU(\\%).\\n\\n|Method|Metric|VOC21|Context60|COCO-Obj|Avg.|\\n|-|-|-|-|-|-|\\n|MaskCLIP+CDAM|KL|55.7|30.4|34.3|40.1|\\n||JS|55.9|30.5|34.3|**40.2**|\\n||WS|53.0|26.7|28.5|36.1|\\n|SCLIP+CDAM|KL|58.8|30.4|34.6|**41.3**|\\n||JS|59.0|30.4|34.5|**41.3**|\\n||WS|57.2|29.0|31.4|39.2|\\n|ClearCLIP+CDAM|KL|57.4|29.5|34.3|40.4|\\n||JS|57.6|29.8|34.5|**40.6**|\\n||WS|56.9|28.6|33.4|39.6|\\n|GEM+CDAM|KL|58.9|30.5|35.1|**41.5**|\\n||JS|58.7|30.6|35.2|**41.5**|\\n||WS|58.4|29.5|34.0|40.6|\"}", "{\"metareview\": \"This paper tackles the problem of open-vocabulary semantic segmentation, aiming to address the limitations due to noisy patch-wise class predictions in existing CLIP-based approaches. To this end, the authors propose the class-distribution-induced attention map, generated using the Jensen-Shannon divergence between class distributions of two patches from the same object, to enhance focus on class-relevant regions without additional training. They also introduce enhancements such as multi-scale image patches, augmented text prompts, and entropy-based background thresholding to improve the performance further. Experimental results show the effectiveness of the proposed idea, which cooperates with several state-of-the-art approaches and results in significant performance improvement.\\nAll reviewers appreciated the clear motivation, reasonable idea, and comprehensive analysis/experiments. 
The main concerns raised by reviewers were limited novelty, unclear exposition, and missing discussion/comparisons. The authors\u2019 detailed rebuttal addressed most of them, resulting in unanimous acceptance at the end of the discussion. AC thus recommends acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The main concerns raised by reviewers were limited novelty, unclear exposition, and missing discussion/comparisons. The authors\u2019 rebuttal addressed most of the concerns so that all reviewers either remained positive or raised their scores after discussion. AC finds no significant issues remaining for publication.\"}", "{\"comment\": \"> **[Q2.1] To what extent do CLIP\u2019s limitations in class diversity and image-text representation influence the segmentation results of CDAM?**\\n\\nCLIP\u2019s limitations in class diversity and image-text representation hinder segmentation performance in general, especially with rare classes in real-world scenarios. These challenges are common to training-free methods (e.g., SCLIP, ClearCLIP, GEM), as they inherit CLIP's constraints without additional training. CDAM itself is not a segmentation method, but a method to aid other CLIP-based methods in enhancing the performance of segmentation. Thus, CDAM may be able to fulfill its goal (i.e., improvement over the original method) despite CLIP's limitations that are shared with the original method.\\n\\nHowever, CDAM won't be able to overcome CLIP's fundamental limitations. We believe that to overcome these issues, training-based approaches such as domain-specific prompt tuning and utilizing more advanced multi-modal foundation models with enhanced class diversity and stronger image-text alignment could mitigate some of these limitations. However, it\u2019s important to note that fully addressing out-of-distribution (OOD) scenarios or achieving perfect class coverage remains challenging without extensive training. 
CDAM's role with the training-based methods is beyond the scope of this training-free work.\\n\\n> **[Q2.2] What specific challenges could arise when applying the heuristic entropy-based background thresholding in complex scenes with ambiguous background features? Are there data-driven approaches that could replace the heuristic thresholding method to improve robustness and adaptability?**\\n\\nThe most significant challenge in complex scenes lies in the method's inability to determine patch-wise threshold values effectively. For simpler scenes (e.g., where a single target-class object is prominently located in the center of the image), using a single threshold value across the entire image may suffice. However, as scenes become more complex, local thresholding becomes necessary, which introduces significant challenges. While our proposed thresholding method has shown excellent performance over VOC21, Context60 and COCO-Obj, leveraging the multi-Otsu algorithm to modify our proposed method has good potential for more challenging scenes. However, the immediate challenge for this extension lies in securing related datasets with complex scenes and corresponding complex annotations, which is beyond the scope of this work.\\n\\nTo address these challenges and enhance the robustness of background detection, data-driven approaches like FOUND (Simeoni et al., CVPR 2023), utilized in CLIP-DIY, can be considered. FOUND leverages DINO in an unsupervised framework to effectively extract background regions. However, its reliance on saliency-based foreground detection without knowledge of target classes limits its adaptability in open-vocabulary semantic segmentation. While data-driven methods show promise, achieving robust and universally adaptable background thresholding across all scenarios remains a significant challenge.\"}", "{\"comment\": \"The authors' responses are greatly appreciated. 
As most of my concerns are resolved, I would raise my score.\"}", "{\"comment\": \"We are pleased to have addressed most of your concerns, and the clarifications in our responses have been incorporated into the revised paper. We also sincerely appreciate your suggestion to further clarify our work.\\n\\nAs suggested, in addition to providing experimental validation of the effectiveness of JS divergence, we will include a discussion on the rationale for using JS divergence over others in the construction of our CDAM as below. It seems natural for the similarity of class distributions to have the properties of symmetry (it should be the same regardless of the input order) and permutation invariance (it should be the same regardless of class order). The former is satisfied by Wasserstein (WS) distance and JS divergence, and the latter is satisfied by KL and JS divergences. Note that it is not straightforward to properly define metrics among classes for the problem of semantic segmentation, which makes WS distance not suitable for our CDAM. Moreover, divergence-based metrics such as KL and JS divergences are known to be more sensitive to small changes than WS distance [1], which may be advantageous for our CDAM. Thus, JS divergence seems to have favorable properties for measuring the distance between class distributions, which is consistent with the experimental validation that we have provided before.\\n\\n[1] Ozair, Sherjil, et al. \\\"Wasserstein dependency measure for representation learning.\\\" NeurIPS (2019).\"}", "{\"comment\": \"> **[Q1] Similar to Table 3, an additional ablation study on CDAM components using another baseline model would be beneficial.**\\n\\nWe conducted additional experiments with several alternative baseline models, including MaskCLIP, SCLIP, ClearCLIP, and GEM, as suggested. As shown in Table 2 (below), the proposed CDAM components improve open-vocabulary semantic segmentation performance without requiring additional training. 
We have updated Table 3 in the main paper and revised Section 4.3 accordingly.\\n\\n**Table 2:** Ablation study on components of our CDAM with several baseline methods. We measured the performance on VOC21. Ba denotes Baseline, At-C denotes $\\text{Attn} _\\text{CDAM}$, At-M denotes $\\text{Attn} _\\text{MS}$, ATP refers to the augmented text prompts and Th-e denotes $\\text{Thr} _\\text{ent-bg}$. The evaluation is based on mIoU(\\%).\\n\\n|Ba|At-C|At-M|ATP|Th-e|MaskCLIP|SCLIP|ClearCLIP|GEM|\\n|-|-|-|-|-|-|-|-|-|\\n|\\u2714| | | | |33.1|50.5|50.7|52.1|\\n|\\u2714|\\u2714| |||50.1|55.0|52.1|54.7|\\n|\\u2714|\\u2714|\\u2714|||53.7|56.9|55.8|56.5|\\n|\\u2714|\\u2714|\\u2714|\\u2714||54.7|57.2|56.0|56.9|\\n|\\u2714|\\u2714|\\u2714|\\u2714|\\u2714|**55.9**|**59.0**|**57.6**|**58.7**|\\n\\n> **[Q2] Since CDAM relies on distributions between patches, an ablation study on the impact of different patch sizes would be beneficial.**\\n\\nTo analyze the impact of different patch sizes on our proposed CDAM, we have conducted benchmark experiments using CLIP ViT/B-32 (patch size = 32), reporting the results in Table 3 below. Note that our submission presented the results using CLIP ViT/B-16 (patch size = 16). Experiments with other patch sizes require a new pre-trained CLIP model, but no model with a patch size of 8 was available. \\n\\nTable 3 demonstrates that our proposed CDAM effectively enhances the performance of CLIP-based training-free segmentation methods with a larger patch size. It also shows that the large patch size (32) usually resulted in decreased performance compared to the small patch size (16) in Table 1 of the main paper. It seems that the small patch size (16) is advantageous over the large patch size (32) for segmentation due to its better spatial resolution. Detailed results are included in the supplementary material (Line 1023-1060).\\n\\n**Table 3:** Ablation study with CLIP ViT/B-32 for exploring different patch sizes. 
Performance improvements achieved by CDAM are indicated in parentheses. The evaluation metric used is mIoU (%).\\n\\n|Method|VOC21|Context60|COCO-Obj|Avg.|\\n|-|-|-|-|-|\\n|MaskCLIP|29.5|8.1|11.5|16.4|\\n|MaskCLIP+**CDAM**|50.1 (+20.6)|27.6 (+19.5)|27.8 (+16.3)|35.2 (+18.8)|\\n|SCLIP|38.0|24.1|25.1|29.1|\\n|SCLIP+**CDAM**|51.6 (+13.6)|25.7 (+1.6)|27.6 (+2.5)|35.0 (+5.9)|\\n|ClearCLIP|47.6|23.3|27.3|32.7|\\n|ClearCLIP+**CDAM**|51.4 (+3.8)|27.5 (+4.2)|28.4 (+1.1)|35.8 (+3.1)|\\n|GEM|52.1|28.1|33.8|38.0|\\n|GEM+**CDAM**|**55.9** (+3.8)|**32.4** (+4.3)|**34.4** (+0.6)|**40.9** (+2.9)|\"}", "{\"comment\": \"> **[W3] The paper lacks discussion of other relevant methods. Methods such as [1], [2], and [3] involve reusing the CLIP [CLS] token and optimizing the feature space, which could enhance CLIP's performance in region recognition. It remains unclear whether these methods could also address the issues raised in this paper.**\\n\\nThere are a number of fundamental differences between the aforementioned methods [1, 2, 3] and our proposed CDAM, but one important difference is that the former use ground truth labels for supervised training, while the latter, our CDAM, is training-free and thus needs no ground truth. The use of ground truth labels makes it hard to analyze whether the methods of [1, 2, 3] could address the raised issue without ground truth labels, as our CDAM does. We have incorporated this discussion into the revised paper under the \\\"Related Works\\\" section to address this point more comprehensively.\\n\\n> **[W4] In Table 2, the performance of ClearCLIP is significantly higher than that of SCLIP, as reported in the original ClearCLIP results. However, after incorporating CDAM, SCLIP outperforms ClearCLIP. The reason for this performance discrepancy requires further explanation.**\\n\\nThis discrepancy can be explained by the structural differences between SCLIP and ClearCLIP. 
ClearCLIP removed the residual connection in its architecture to suppress noisy class predictions. Since CDAM uses class distribution similarity to overcome noisy class predictions, the advantage of ClearCLIP had been diminished, so a smaller performance gain was obtained with CDAM.\"}", "{\"comment\": \"Dear Reviewer ehJ5,\\n\\nWe sincerely appreciate your valuable feedback and thoughtful comments. We have carefully addressed your concerns, particularly regarding the sufficient support for our motivation and the explanations of the techniques (JS divergence). As the discussion period is drawing to a close, we would be most grateful if you could kindly let us know whether our responses have sufficiently resolved your concerns. Please let us know if further clarifications are needed.\\n\\nThank you again for your time and kind consideration.\\n\\nBest regards, Authors\"}", "{\"comment\": \"We appreciate the reviewer's valuable and insightful feedback.\\n\\n> **[Q1] How does the complexity introduced by the Jensen-Shannon divergence calculation and multi-scale patch analysis impact the model\\u2019s runtime and feasibility in real-time applications?**\\n\\nWe have conducted two experiments (1 NVIDIA A100 GPU used): (1) a comparison of inference time with several baselines (Table 1) and (2) an analysis of the inference time for each component of our CDAM (Table 2). Considering how much CDAM can improve performance (to the level of CaR!), the additional computation time can be justified and one can still obtain the result for one image within at most 60 msec (0.06 sec), which may still be suitable for many real-time applications. Note that Table 2 shows that the majority of the computational cost in CDAM arises from its multi-scale image patches ($\\\\text{Attn}\\\\_\\\\text{MS}$) and augmented text prompts (ATP). 
The computational overhead increases with the number of scales ($m$) for $\\\\text{Attn}_\\\\text{MS}$ and the number of classes for ATP, primarily due to the increased computational burden of computing Jensen-Shannon (JS) divergence. These results have been included in the revised paper (Line 460-464 and Line 782-804).\\n\\n**Table 1:** Inference time comparison (in seconds per image) of baseline methods with our CDAM. Despite introducing minimal computational overhead, CDAM remains feasible for real-time applications, especially when compared to computationally intensive methods like CaR and CLIP-DIY.\\n\\n|Method|VOC21|Context60|COCO-Obj|\\n|-|-|-|-|\\n|CaR|3.497 sec|9.340 sec|12.270 sec|\\n|CLIP-DIY|0.520 sec|-|0.559 sec|\\n|MaskCLIP|0.017 sec|0.017 sec|0.017 sec|\\n|MaskCLIP+**CDAM**|0.043 sec (+0.026)|0.049 sec (+0.032)|0.051 sec (+0.034)|\\n|SCLIP|0.018 sec|0.018 sec|0.018 sec|\\n|SCLIP+**CDAM**|0.044 sec (+0.026)|0.050 sec (+0.032)|0.052 sec (+0.034)|\\n|ClearCLIP|0.017 sec|0.018 sec|0.018 sec|\\n|ClearCLIP+**CDAM**|0.044 sec (+0.027)|0.050 sec (+0.032)|0.051 sec (+0.033)|\\n|GEM|0.026 sec|0.026 sec|0.026 sec|\\n|GEM+**CDAM**|0.052 sec (+0.026)|0.059 sec (+0.033)|0.060 sec (+0.034)|\\n\\n**Table 2:** Inference time (in seconds per image) for each component of CDAM. The baseline model is set as MaskCLIP, as the additional overhead introduced by CDAM is consistent across other baseline methods. 
Ba denotes Baseline, At-C denotes $\\\\text{Attn} _\\\\text{CDAM}$, At-M denotes $\\\\text{Attn} _\\\\text{MS}$, ATP refers to the augmented text prompts and Th-e denotes $\\\\text{Thr} _\\\\text{ent-bg}$.\\n\\n|Ba|At-C|At-M|ATP|Th-e|VOC21|Context60|COCO-Obj|\\n|-|-|-|-|-|-|-|-|\\n|\\u2714| | | | |0.017 sec|0.017 sec|0.017 sec|\\n|\\u2714|\\u2714| |||0.020 sec|0.022 sec|0.024 sec|\\n|\\u2714|\\u2714|\\u2714|||0.032 sec|0.038 sec|0.040 sec|\\n|\\u2714|\\u2714|\\u2714|\\u2714||0.043 sec|0.049 sec|0.051 sec|\\n|\\u2714|\\u2714|\\u2714|\\u2714|\\u2714|0.043 sec|0.049 sec|0.051 sec|\"}", "{\"comment\": \"We appreciate your recognition of our method's simplicity and effectiveness.\\n\\nIt seems that your question may stem from some existing works such as [1,2] that used mask annotations for specific classes in supervised ways (e.g., prompt tuning and/or tuning an additional decoder) while their titles contain \\\"zero-shot\\\" to emphasize their capability to deal with unseen classes as well. These methods, thus, can be evaluated for both seen and unseen classes. See CaR[3] and SegCLIP[4] for the clear differences between mask annotation-free and annotation-based methods. While our work has focused on annotation-free methods (see below for details), we also share your concern about potential catastrophic forgetting of seen classes. Thus, we performed additional experiments for mask annotation-based supervision methods with our CDAM and the results are reported in Table 2. As demonstrated, our CDAM did not degrade the performance for seen classes that were optimized by the training-based methods with mask annotation (ZegCLIP[1], OTSeg[2]) while consistently improving the performance for unseen classes.\", \"table_2\": \"The results of semantic segmentation methods supervised with segmentation mask supervision [1,2] + CDAM on VOC 2012. 
hIoU refers to harmonic mean IoU.\\n\\n|Method|mIoU (Seen)| mIoU (Unseen) | hIoU|\\n|-|-|-|-|\\n|ZegCLIP[1]| 91.8|77.9| 84.3|\\n|ZegCLIP[1] + CDAM | 91.8 (+0.0)| 78.4 (+0.5)|84.6 (+0.3)|\\n|OTSeg[2]| 93.3|81.8|87.2|\\n|OTSeg[2] + CDAM | 93.4 (+0.1)|82.2 (+0.4)|87.4 (+0.2)|\\n\\nAs you may know, if a model is trained or tuned with mask annotations for some classes, those classes are called \\\"seen classes.\\\" In contrast, the methods like our proposed method and 16 compared prior arts in Table 1 of the main paper (e.g., SCLIP, ClearCLIP, SegCLIP, FreeDA, etc.) do NOT use any mask annotation. To clarify further, there are three distinct categories for the methods: (1) **mask annotation-based** supervision methods (e.g., ZegCLIP, OTSeg), thus having \\\"seen classes\\\" (2) **mask annotation-free, weakly-supervised** methods that leverage image-text paired datasets (e.g., GroupViT, SegCLIP, etc.), thus NOT having \\\"seen classes\\\" and (3) **mask annotation-free**, training-free methods, including our method (e.g., SCLIP, ClearCLIP, CaR, etc.), thus clearly NOT having \\\"seen classes\\\". Since all methods in Table 1 (Category 2 and 3) were neither trained nor tuned with any mask annotation, there is no seen class. Thus, the evaluations of ours as well as these other prior works have focused exclusively on unseen classes. This has been clarified in the \\\"Related Works\\\" section of the revised paper.\\n\\n[1] Zhou, Ziqin, et al. \\\"Zegclip: Towards adapting clip for zero-shot semantic segmentation.\\\" CVPR (2023).\\n\\n[2] Kim, Kwanyoung, et al. \\\"OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation.\\\" ECCV (2024).\\n\\n[3] Sun, Shuyang, et al. \\\"Clip as rnn: Segment countless visual concepts without training endeavor.\\\" CVPR (2024).\\n\\n[4] Luo, Huaishao, et al. 
\\\"Segclip: Patch aggregation with learnable centers for open-vocabulary semantic segmentation.\\\" PMLR (2023).\"}", "{\"comment\": \"The author has addressed most of my concerns, so I will increase the score. Regarding the JS divergence issue, the experimental validation can only prove its effectiveness; the paper should further discuss the underlying reasons for the effectiveness of the method in relation to JS divergence.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"> **[Q3] A detailed analysis of the computational complexity of the CDAM would be helpful. Additional experiments related to runtime or efficiency when constructing CDAM would add value.**\\n\\nWe have conducted two experiments (1 NVIDIA A100 GPU used): (1) a comparison of inference time with several baselines (Table 4) and (2) an analysis of the inference time for each component of our CDAM (Table 5). Considering how much CDAM can improve performance (to the level of CaR!), the additional computation time can be justified and one can still obtain the result for one image within at most 60 msec (0.06 sec), which may still suitable for many real-time applications. Note that Table 5 shows that the majority of the computational cost in CDAM arises from its multi-scale image patches ($\\\\text{Attn} _\\\\text{MS}$) and augmented text prompts (ATP). The computational overhead increases with the number of scales ($m$) for $\\\\text{Attn} _\\\\text{MS}$ and the number of classes for ATP, primarily due to the increased computational burden of computing Jensen-Shannon (JS) divergence. 
These results have been included in the revised paper (Line 460-464 and Line 782-804).\\n\\n**Table 4:** Inference time comparison (in seconds per image) of baseline methods with our CDAM.\\n\\n|Method|VOC21|Context60|COCO-Obj|\\n|-|-|-|-|\\n|CaR|3.497 sec|9.340 sec|12.270 sec|\\n|CLIP-DIY|0.520 sec|-|0.559 sec|\\n|MaskCLIP|0.017 sec|0.017 sec|0.017 sec|\\n|MaskCLIP+**CDAM**|0.043 sec (+0.026)|0.049 sec (+0.032)|0.051 sec (+0.034)|\\n|SCLIP|0.018 sec|0.018 sec|0.018 sec|\\n|SCLIP+**CDAM**|0.044 sec (+0.026)|0.050 sec (+0.032)|0.052 sec (+0.034)|\\n|ClearCLIP|0.017 sec|0.018 sec|0.018 sec|\\n|ClearCLIP+**CDAM**|0.044 sec (+0.027)|0.050 sec (+0.032)|0.051 sec (+0.033)|\\n|GEM|0.026 sec|0.026 sec|0.026 sec|\\n|GEM+**CDAM**|0.052 sec (+0.026)|0.059 sec (+0.033)|0.060 sec (+0.034)|\\n\\n\\n**Table 5:** Inference time (in seconds per image) for each component of CDAM. The baseline model is set as MaskCLIP, as the additional overhead introduced by CDAM is consistent across other baseline methods. Ba denotes Baseline, At-C denotes $\\\\text{Attn} _\\\\text{CDAM}$, At-M denotes $\\\\text{Attn} _\\\\text{MS}$, ATP refers to the augmented text prompts and Th-e denotes $\\\\text{Thr} _\\\\text{ent-bg}$.\\n\\n|Ba|At-C|At-M|ATP|Th-e|VOC21|Context60|COCO-Obj\\n|-|-|-|-|-|-|-|-|\\n|\\u2714| | | | |0.017 sec|0.017 sec|0.017 sec|\\n|\\u2714|\\u2714| |||0.020 sec|0.022 sec|0.024 sec|\\n|\\u2714|\\u2714|\\u2714|||0.032 sec|0.038 sec|0.040 sec|\\n|\\u2714|\\u2714|\\u2714|\\u2714||0.043 sec|0.049 sec|0.051 sec|\\n|\\u2714|\\u2714|\\u2714|\\u2714|\\u2714|0.043 sec|0.049 sec|0.051 sec|\\n\\n> **[Q4] Comparing Table 1 and Table 2, there is a noticeable difference in CDAM\\u2019s improvement with/without the background class. The authors should provide more explanation for these differences to clarify their impact on performance.**\\n\\nThese differences stem from the different datasets and their corresponding evaluation metrics. 
Table 1 used the datasets with \\\"background\\\" class so that the background must be predicted accurately for better performance. However, Table 2 used the datasets where no background class is defined, so accurate background estimation is not taken into account in the performance evaluation. Thus, the observed differences in CDAM's performance improvement between datasets (Table 1 vs. Table 2) are primarily attributed to the evaluation results with and without the background class. Specifically, in Table 2 (no background class), inaccurate predictions in background areas have minimal influence on the evaluation metric, reducing their impact on the reported performance. Thus, CDAM's advantage of predicting background accurately with reduced false positives led to a significant performance improvement in Table 1, but a smaller performance gain in Table 2. This explanation has been included in the revised paper (Line 414-417).\"}", "{\"summary\": \"Current CLIP-based Open-Vocabulary Semantic Segmentation (OVSS) methods face limitations due to noisy patch-wise class predictions and highly correlated class distributions for each object. To address these issues, the authors propose a Class Distribution-induced Attention Map (CDAM), generated using the Jensen-Shannon divergence between class distributions of two patches from the same object, to enhance focus on class-relevant regions without additional training. The authors also introduce enhancements such as multi-scale image patches, augmented text prompts, and entropy-based background thresholding to further improve CDAM. Comprehensive experiments demonstrate that CDAM improves multiple OVSS methods across several datasets, with ablation studies validating the effectiveness of each component.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The motivation is good, with CDAM introducing a novel approach to leverage class distributions for enhancing OVSS performance, offering a new pathway for training-free segmentation improvements.\\n2. The paper is well-written, with clear explanations of the methodology, experimental settings and results.\\n3. CDAM is training-free, and the authors provide comprehensive quantitative results demonstrating the effectiveness of each component.\", \"weaknesses\": \"Writing Suggestions\\n1. The authors state that \\u201cCLIP-based prior works yield patch-wise noisy class predictions while having highly correlated class distributions for each object.\\u201d Is this conclusion based on statistical analysis, or is it an observation from limited examples? Providing more statistical evidence would make this argument more convincing.\\n2. In the third paragraph of Section 1 (lines 51 to 60), the authors should provide additional details on how CDAM is constructed and, more importantly, explain why it is effective. Focusing on why it works would strengthen this section.\\n3. The order of Figure 1 and Figure 2 should be swapped, as Figure 2 is referenced before Figure 1 (lines 125 to 126).\", \"questions\": \"1. Similar to Table 3, an additional ablation study on CDAM components using another baseline model would be beneficial.\\n2. Since CDAM relies on distributions between patches, an ablation study on the impact of different patch sizes would be beneficial.\\n3. A detailed analysis of the computational complexity of the CDAM would be helpful. Additional experiments related to runtime or efficiency when constructing CDAM would add value.\\n4. Comparing Table 1 and Table 2, there is a noticeable difference in CDAM\\u2019s improvement with/without the background class. 
The authors should provide more explanation for these differences to clarify their impact on performance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a Class Distribution-induced Attention Map (CDAM) to enhance the capability of CLIP features in representing different categories for open-vocabulary semantic segmentation. The proposed method can be easily embedded into various approaches to boost their performance. Additionally, the authors introduce an entropy-based background thresholding technique for semantic segmentation inference to facilitate the extraction of foreground classes. The experiments demonstrate the effectiveness of the proposed methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written, with a clear and accessible presentation.\\n2. The authors conducted comprehensive experiments that effectively highlight the superiority of the proposed method.\", \"weaknesses\": \"1. The authors claim that CLIP-based prior works yield patch-wise noisy class predictions while having highly correlated class distributions for each object, but this lacks necessary validation. Although an analysis of CLIP and MaskCLIP is provided in the methods section to support this claim, the analysis is not sufficiently general. MaskCLIP is an earlier work, and more recent research in the field may have addressed patch-wise noisy class predictions. Thus, the argument may not be robust, and the paper's novelty compared to related works remains debatable.\\n\\n2. Jensen-Shannon divergence is a key technique used in this paper. However, there is insufficient discussion on the necessity of using this method and why alternative techniques would be inadequate.\\n\\n3. The paper lacks discussion of other relevant methods. 
Methods such as [1], [2], and [3] involve reusing the CLIP [CLS] token and optimizing the feature space, which could enhance CLIP's performance in region recognition. It remains unclear whether these methods could also address the issues raised in this paper.\\n\\n4. In Table 2, the performance of ClearCLIP is significantly higher than that of SCLIP, as reported in the original ClearCLIP results. However, after incorporating CDAM, SCLIP outperforms ClearCLIP. The reason for this performance discrepancy requires further explanation.\\n\\n[1] Side Adapter Network for Open-Vocabulary Semantic Segmentation\\n\\n[2] Learning Mask-aware CLIP Representations for Zero-Shot Segmentation CLIP-Adapted Region-to-Text Learning\\n\\n[3] AlignZeg: Mitigating Objective Misalignment for Zero-shot Semantic Segmentation\", \"questions\": \"My primary concern is that the motivation for this work lacks sufficient support, and there is a lack of necessary explanation for some of the techniques used in the proposed method, as outlined in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate your thoughtful review and valuable feedback.\\n\\n> **[W1] In Table 1, even though CaR is a heavily computational method and CLIP-DIY uses an extra background extractor, the proposed CDAM is not integrated into CaR and CLIP-DIY, and the best performance is not achieved on VOC21.**\\n\\nMany recent CLIP-based training-free classification methods rely on task-agnostic local visual tokens and our CDAM can be seamlessly integrated with them for improved performance. 
However, CaR and CLIP-DIY use task-specific CLS tokens for region classification, so it is not straightforward to incorporate them with our CDAM yet.\\n\\nNote that our CDAM with GEM outperformed all prior works including CaR and CLIP-DIY on average over all benchmark datasets as well as on both Context60 and COCO-Obj. Our CDAM with SCLIP yielded the second best performance (comparable to the first!) on VOC21, but with incredibly fast inference time (CaR 59.4% with 3.497 sec per image vs. SCLIP+CDAM 59.0% with 0.043 sec per image). See the below table for inference time.\", \"table_1\": \"Inference time comparison (seconds per image).\\n\\n|Method|VOC21|Context60|COCO-Obj|\\n|-|-|-|-|\\n|CaR|3.497 sec|9.340 sec|12.270 sec|\\n|CLIP-DIY|0.520 sec|-|0.559 sec|\\n|MaskCLIP|0.017 sec|0.017 sec|0.017 sec|\\n|MaskCLIP+**CDAM**|0.043 sec (+0.026)|0.049 sec (+0.032)|0.051 sec (+0.034)|\\n|SCLIP|0.018 sec|0.018 sec|0.018 sec|\\n|SCLIP+**CDAM**|0.044 sec (+0.026)|0.050 sec (+0.032)|0.052 sec (+0.034)|\\n|ClearCLIP|0.017 sec|0.018 sec|0.018 sec|\\n|ClearCLIP+**CDAM**|0.044 sec (+0.027)|0.050 sec (+0.032)|0.051 sec (+0.033)|\\n|GEM|0.026 sec|0.026 sec|0.026 sec|\\n|GEM+**CDAM**|0.052 sec (+0.026)|0.059 sec (+0.033)|0.060 sec (+0.034)|\\n\\n> **[Q1] It's better to give the mIoU both for seen and unseen classes.**\\n\\nAs you suggested, open-vocabulary semantic segmentation methods can be evaluated either for seen / unseen classes or unseen classes only. However, we have focused on unseen classes only since our method is training-free and all 16 prior works in Table 1 including CaR, CLIP-DIY, MaskCLIP, SCLIP, ClearCLIP, GEM were also evaluated for unseen classes only. Extending this work for the case with seen / unseen classes, however, can be a great future work.\"}", "{\"summary\": \"This paper introduces CDAM for open-vocabulary semantic segmentation that improves object localization by utilizing class distribution similarities within patches. 
It employs Jensen-Shannon divergence to create an attention map that boosts CLIP's segmentation accuracy without extra training. CDAM also uses multi-scale patches and entropy-based thresholding for enhanced performance, outperforming other methods on segmentation benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper improves the localization of objects within images by leveraging class distribution similarities.\\n\\n2. The entropy-based background thresholding adapts dynamically to different images, which helps in accurately separating the foreground from the background in segmentation tasks.\", \"weaknesses\": \"1. There is little explanation why the Jensen-Shannon divergence is suitable for semantic segmentation. The rationality of this method should be explained clearly.\\n\\n2. There are some confusing aspects in the use of symbols in this paper, such as the final similarity map S in line 251 and the class distribution S in line 266; The image representation is unclear, for example, the meaning of S_p1 in Figure 1 has not been mentioned yet in the paper.\\n\\n3. The JS divergence is used in the paper to obtain Attn_CDAM. How is the effect of using KL divergence and other measurement methods.\\n\\n4. What is the inference time after adding CDAM to other methods.\\n\\n5. The word 'food' appears twice in the Supercategory in Tab 4. An additional \\u2018()\\u2019 appeared in line 330\", \"questions\": \"1. 
How to explain the rationality of using two hyper-parameters \\u03b1 and Thr_default to control the background thresholding in Formula 4?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"There is no ethics concerns.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **[Q3] Why did the authors focus their analysis primarily on MaskCLIP, and what advantages or disadvantages does this bring in evaluating CDAM\\u2019s effectiveness? How would CDAM\\u2019s performance and computational requirements compare if implemented on more recent segmentation models like CaR or others?**\\n\\nWe chose MaskCLIP initially since it was a well-known method in the community (468 citations). To relieve your concern, we have performed further analysis with more recent baselines, which was reported in Table 3 below. Table 3 provides statistical evidence to support our claim over diverse recent baselines, reporting the results of the following experiment: For a given image, one patch $P_{target}$ was randomly selected and then two patches $P_{in}$ and $P_{out}$ were randomly selected from the target class region and the rest of the region, respectively. Then, we measure (1) the probability of {class prediction in $P_{target}$ is correct} and (2) the probability of {distribution similarity between $P_{target}$ and $P_{in}$ < distribution similarity between $P_{target}$ and $P_{out}$}. These results clearly support that our claim is still valid even for more recent CLIP-based prior works such as SCLIP, ClearCLIP and GEM. These results were reflected in the revision (Line 211-221).\\n\\nIn addition, we have conducted an ablation study to validate the effectiveness of our proposed CDAM components over various baselines including recent works such as SCLIP, ClearCLIP, and GEM. 
Table 4 shows that the addition of our CDAM components consistently improved open-vocabulary semantic segmentation performance without requiring additional training. These new studies strengthen our claim and demonstrate the robustness and novelty of our approach over diverse prior arts. The corresponding updates have been incorporated into the revised paper (Section 4.3).\\n\\nLastly, many recent CLIP-based training-free classification methods rely on task-agnostic local visual tokens and our CDAM can be seamlessly integrated with them for improved performance. However, CaR and CLIP-DIY use task-specific CLS tokens for region classification, so it is not straightforward to incorporate them with our CDAM yet. We have included this clarification in the revised paper (Line 377-405).\\n\\n**Table 3:** Accuracy comparison of class predictions and similarity of class distributions with several CLIP-based training-free methods across datasets. Similarity of class distribution is measured using JS divergence.\\n\\n|Baseline||VOC21|Context60|COCO-Obj|Avg.|\\n|-|-|-|-|-|-|\\n|MaskCLIP|Class Prediction|56.1 $\\\\pm$ 1.17|38.4 $\\\\pm$ 0.23|27.4 $\\\\pm$ 0.46|43.0|\\n||Sim of Class Dist|70.9 $\\\\pm$ 0.44|73.1 $\\\\pm$ 0.34|69.8 $\\\\pm$ 0.42|71.0|\\n|SCLIP|Class Prediction|67.0 $\\\\pm$ 0.49|41.8 $\\\\pm$ 0.31|33.6 $\\\\pm$ 0.23|47.4|\\n||Sim of Class Dist|78.9 $\\\\pm$ 0.26|72.0 $\\\\pm$ 0.30|75.4 $\\\\pm$ 0.55|75.5|\\n|ClearCLIP|Class Prediction|70.3 $\\\\pm$ 0.51|42.7 $\\\\pm$ 0.23|36.4 $\\\\pm$ 0.19|49.9|\\n||Sim of Class Dist|76.0 $\\\\pm$ 0.58|70.5 $\\\\pm$ 0.58|71.7 $\\\\pm$ 0.48|72.3|\\n|GEM|Class Prediction|70.8 $\\\\pm$ 1.12|42.4 $\\\\pm$ 0.38|37.5 $\\\\pm$ 0.45|50.0|\\n||Sim of Class Dist|79.4 $\\\\pm$ 0.89|71.2 $\\\\pm$ 0.34|74.2 $\\\\pm$ 0.20|74.8|\\n\\n\\n**Table 4:** Ablation study on components of our CDAM with several baseline methods. We measured the performance on VOC21. ATP refers to the augmented text prompts. 
Ba denotes Baseline, At-C denotes $\\\\text{Attn} _\\\\text{CDAM}$, At-M denotes $\\\\text{Attn} _\\\\text{MS}$, ATP refers to the augmented text prompts and Th-e denotes $\\\\text{Thr} _\\\\text{ent-bg}$. The evaluation is based on mIoU(\\\\%).\\n\\n|Ba|At-C|At-M|ATP|Th-e|MaskCLIP|SCLIP|ClearCLIP|GEM|\\n|-|-|-|-|-|-|-|-|-|\\n|\\u2714| | | | |33.1|50.5|50.7|52.1|\\n|\\u2714|\\u2714| |||50.1|55.0|52.1|54.7|\\n|\\u2714|\\u2714|\\u2714|||53.7|56.9|55.8|56.5|\\n|\\u2714|\\u2714|\\u2714|\\u2714||54.7|57.2|56.0|56.9|\\n|\\u2714|\\u2714|\\u2714|\\u2714|\\u2714|**55.9**|**59.0**|**57.6**|**58.7**|\\n\\n> **[Q4] How might the CDAM approach be adapted to address challenges in recognizing rare or fine-grained classes that are crucial in open-vocabulary segmentation?**\\n\\nAs discussed in [Q2.2], CLIP-based training-free methods, including our CDAM, SCLIP, ClearCLIP, and GEM, are constrained by CLIP's inherent limitations in handling rare or fine-grained classes due to the restricted class diversity and image-text representation in the pre-trained model. However, unlike other methods, CDAM integrates textual information of target classes to guide visual feature extraction through an attention-based process. Our approach leverages augmented text prompts, such as attributes (e.g., color, texture), to better localize these fine-grained or rare targets in the attention map, thereby mitigating these challenges to some extent.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for taking the time to review our responses. We are pleased that our responses have improved the clarity of our contributions, and we sincerely appreciate the increased score. The detailed responses have been incorporated into the revised paper. Once again, thank you for your valuable and constructive suggestions.\"}", "{\"comment\": \"I'm glad to see other reviewers raise their scores as I really like simple yet effective methods. 
For the performance of unseen classes, it just needs running the evaluation code, which is very quick. I don't know why it cannot be provided. I suspect the method may not be good enough on seen classes. If a model worked well on unseen classes but forgot the seen classes, I think it is not practical.\"}", "{\"comment\": \"Thank you for your detailed suggestions for improving the clarity of our work. We also appreciate the increase in your score. We have incorporated our responses into both the main paper and the supplementary material for better understanding and enhanced readability. Thank you once again for your valuable feedback.\"}", "{\"title\": \"Thanks\", \"comment\": \"Thank you for your detailed responses and clarifications. I greatly appreciate your efforts. Kindly incorporate the modified content into the main paper where possible. If this is not possible, please add it to the supplementary material. These additions will significantly enhance the paper\\u2019s readability. Based on these changes, I will increase my rating.\"}", "{\"comment\": \"We appreciate your constructive comments and suggestions.\\n\\n> **[W1] The authors state that \\u201cCLIP-based prior works yield patch-wise noisy class predictions while having highly correlated class distributions for each object.\\u201d Is this conclusion based on statistical analysis, or is it an observation from limited examples? Providing more statistical evidence would make this argument more convincing.**\\n\\nTable 1 provides statistical evidence to support our claim, reporting the results of the following experiment: For a given image, one patch $P_{target}$ was randomly selected and then two patches $P_{in}$ and $P_{out}$ were randomly selected from the target class region and the rest of the region, respectively. 
Then, we measure (1) the probability of {class prediction in $P_{target}$ is correct} and (2) the probability of {distribution similarity between $P_{target}$ and $P_{in}$ < distribution similarity between $P_{target}$ and $P_{out}$}. These results clearly support our claim that \\\"CLIP-based prior works yield patch-wise noisy class predictions while having highly correlated class distributions for each object\\\". These results were reflected in the revision (Line 211-221).\\n\\n**Table 1:** Accuracy comparison of class predictions and similarity of class distributions with several CLIP-based training-free methods across datasets. Similarity of class distribution is measured using JS divergence.\\n\\n|Baseline||VOC21|Context60|COCO-Obj|Avg.|\\n|-|-|-|-|-|-|\\n|MaskCLIP|Class Prediction|56.1 $\\\\pm$ 1.17|38.4 $\\\\pm$ 0.23|27.4 $\\\\pm$ 0.46|43.0|\\n||Sim of Class Dist|70.9 $\\\\pm$ 0.44|73.1 $\\\\pm$ 0.34|69.8 $\\\\pm$ 0.42|71.0|\\n|SCLIP|Class Prediction|67.0 $\\\\pm$ 0.49|41.8 $\\\\pm$ 0.31|33.6 $\\\\pm$ 0.23|47.4|\\n||Sim of Class Dist|78.9 $\\\\pm$ 0.26|72.0 $\\\\pm$ 0.30|75.4 $\\\\pm$ 0.55|75.5|\\n|ClearCLIP|Class Prediction|70.3 $\\\\pm$ 0.51|42.7 $\\\\pm$ 0.23|36.4 $\\\\pm$ 0.19|49.9|\\n||Sim of Class Dist|76.0 $\\\\pm$ 0.58|70.5 $\\\\pm$ 0.58|71.7 $\\\\pm$ 0.48|72.3|\\n|GEM|Class Prediction|70.8 $\\\\pm$ 1.12|42.4 $\\\\pm$ 0.38|37.5 $\\\\pm$ 0.45|50.0|\\n||Sim of Class Dist|79.4 $\\\\pm$ 0.89|71.2 $\\\\pm$ 0.34|74.2 $\\\\pm$ 0.20|74.8|\\n\\n> **[W2] In the third paragraph of Section 1 (lines 51 to 60), the authors should provide additional details on how CDAM is constructed and, more importantly, explain why it is effective. Focusing on why it works would strengthen this section.**\\n\\nWe have revised the third paragraph of Section 1 to emphasize why leveraging the similarity of class distributions is effective for generating attention maps, as implemented in CDAM, for segmentation. 
The reason why it works is now supported by a new statistical study in the above Table (see the response to [W1]).\\n\\n> **[W3] The order of Figure 1 and Figure 2 should be swapped, as Figure 2 is referenced before Figure 1 (lines 125 to 126).**\\n\\nWe have swapped the positions of Figure 1 and Figure 2 to align with the reference order.\"}" ] }
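The quantity at the center of this discussion is the Jensen-Shannon (JS) divergence between per-patch class distributions (the "Sim of Class Dist" rows in the tables above). The following minimal Python sketch is not the authors' implementation, and the toy three-class distributions are invented for illustration; it shows how JS divergence can rate two patches as similar even when their argmax class predictions disagree, which is the intuition behind using class-distribution similarity instead of hard predictions:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), where m = (p + q) / 2.
    Symmetric and bounded by ln(2) when using the natural log.
    """
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Terms with a_i == 0 contribute 0 by convention.
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical per-patch class distributions over 3 classes.
# p_target and p_in disagree in their argmax prediction (class 0 vs. 1),
# yet their distributions are close, so their JS divergence is small.
p_target = [0.45, 0.40, 0.15]  # patch inside the object
p_in     = [0.40, 0.45, 0.15]  # another patch inside the object
p_out    = [0.05, 0.10, 0.85]  # patch outside the object

assert js_divergence(p_target, p_in) < js_divergence(p_target, p_out)
```

Unlike KL divergence, JS divergence is symmetric and bounded (by ln 2 in nats), which is one reason it is a convenient pairwise similarity measure between patches; this relates to the reviewer question above about KL and other measures.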
CMj18BQQDK
VideoPanda: Video Panoramic Diffusion With Multi-view Attention
[ "Kevin Xie", "Amirmojtaba Sabour", "Jiahui Huang", "Despoina Paschalidou", "Umar Iqbal", "Sanja Fidler", "Xiaohui Zeng" ]
High resolution panoramic video content is paramount for immersive experiences in Virtual Reality, but is non-trivial to collect as it requires specialized equipment and intricate camera setups. In this work, we introduce VideoPanda, a novel approach for synthesizing $360^\circ$ videos conditioned on text or single-view video data. VideoPanda leverages multi-view attention layers to augment a video diffusion model, enabling it to generate consistent multi-view videos that can be combined into immersive panoramic content. VideoPanda is trained jointly using two conditions: text-only and single-view video, and supports autoregressive generation of long videos. To overcome the computational burden of multi-view video generation, we randomly subsample the duration and camera views used during training and show that the model is able to gracefully generalize to generating more frames during inference. Extensive evaluations on both real-world and synthetic video datasets demonstrate that VideoPanda generates more realistic and coherent $360^\circ$ panoramas across all input conditions compared to existing methods. Visit the project website at https://mvpanovideo.github.io/VideoPanda/ for results.
[ "video generation", "diffusion model", "panorama" ]
Reject
https://openreview.net/pdf?id=CMj18BQQDK
https://openreview.net/forum?id=CMj18BQQDK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xxosqYJPr9", "xP7y8uAJ3a", "vyML4OG6Ql", "vvxTfPdKof", "vYgjFDqmBU", "nRCoMYzBG2", "j6uyDTwCDd", "gqxRAMfd75", "dvwi97vS9Q", "ckL6cxzHR9", "cd89MHP9my", "W98YMBsPpQ", "VgeURs93Dd", "T4OAmVgbc6", "OnrU2CY3p6", "DHUx4fYbJC", "Ar52VcwpHd", "8vonNEOdYI", "8fi1H2cGKI", "8WioWziUcj", "7FL9Q2wmtv", "6lMReIMt0T", "6VJKQorAxK", "6CNX8INoC6", "4jiYwKF8TO", "2eKgBKWe6W", "1xHAMSkOCC", "14IbZlWzEQ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732624250774, 1732217698740, 1732554790441, 1732556109617, 1730533058660, 1734572063255, 1732316189779, 1730118009857, 1732648238952, 1733127108694, 1732716938198, 1732316901678, 1732294940049, 1730600256147, 1737523853594, 1732556769330, 1732556228919, 1730691834950, 1732294983804, 1732599903700, 1732316878062, 1732217820522, 1732648366892, 1732556267276, 1732556790215, 1732316361860, 1732673564190, 1732554590585 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7653/Reviewer_cZZr" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Reviewer_EySz" ], [ "ICLR.cc/2025/Conference/Submission7653/Area_Chair_LfHR" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Reviewer_U9nk" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Reviewer_TXRj" 
], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Reviewer_cZZr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Reviewer_TXRj" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Reviewer_EySz" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ], [ "ICLR.cc/2025/Conference/Submission7653/Reviewer_U9nk" ], [ "ICLR.cc/2025/Conference/Submission7653/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the carefully prepared results and responses. Through using a more powerful base model, CogVideoX-2B, and leveraging additional panorama videos for training, the updated results have shown significant improvements over the previous ones. However, my concerns are only partially addressed.\\n\\nOn the one hand, the proposed method indeed generates visually promising results when the base model and training data are enhanced. On the other hand, the improvements and advantages of the proposed method are largely attributed to the capabilities of the base model and the larger training dataset, rather than the method itself. To me, compared to the baseline 360DVD, the advantage of the proposed method is still unclear. This is because 360DVD is based on a base model like AnimateDiff and uses the WEB360 training dataset. What would happen if these two enhancements were applied to 360DVD? 
I still need more justification for the superiority of the proposed method over existing baselines.\"}", "{\"title\": \"Summary Response to All (we will also individually reply to each reviewer shortly)\", \"comment\": \"We thank all the reviewers for their time and great feedback! To address some of the common concerns raised by the reviewers, we made the following additions.\\n\\n\\n**1 Expanding and Improving OOD Evaluations**\\n\\nWe greatly expanded and improved our OOD evaluations for both the text-conditional and video-conditional settings.\\n\\nOur text-conditional OOD evaluations are now using the full VBench suite of prompts (1700+ prompts) with great diversity and coverage of different dimensions and categories.\\n\\nOur video-conditional OOD evaluations are using the 50 videos we curate from publicly available AI generated videos and they can feature cases with non-standard FOV and elevation.\\n\\nWe show many additional qualitative examples now on our website (along with comparison to 360DVD). Please see the qualitative results at links below:\", \"ood_text_cond_vbench_prompts\": \"https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\", \"ood_col_cond_ai_generated_videos\": \"https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html\\n\\nPlease take a look!\", \"update\": \"We have computed quantitative metrics for our OOD evaluation comparing VideoPanda to 360DVD for the text-conditional setting on using all 946 VBench-all_dimension prompts including Clip score, FVD to HDVila3k and FID to HDVila3k first frames. Unpaired FVD and FID was based on suggestion from Reviewer TXRj. We thank them very much for the suggestion. Our original VideoPanda model on VideoLDM (presented in the paper) performs better than 360DVD across all the metrics on this benchmark. 
Please see the quantitative metrics in the comment **(Point 7 OOD Evaluation: Quantitative Metrics)** above.\\n\\n\\n**2 Results on new base model (CogVideoX-2B)**\\n\\nMany reviewers were concerned about the OOD generalization ability of our VideoPanda model. \\nTo clarify, in the main paper we used the VideoLDM model (not SVD) which was presented in the work Align your Latents [Blattmann et al., 2023a] as cited in our paper. This model is a text-to-video model based on SD1.5 and is quite similar in design to AnimateDiff.\\n\\n\\nIt shows quite good generalization in the video-conditional setting but some OOD prompts can be challenging in the text-conditional setting. Please see the extended qualitative OOD video and text-conditional results on the updated websites linked above (in point 1) for a better understanding of the model\\u2019s capabilities.\\nWe found that this weakness on OOD text-conditional setting primarily stems from the weakness of the base model itself not understanding the prompts.\\n\\n\\nSo we implemented the VideoPanda method on top of a more powerful model, CogVideoX-2B which is a medium-sized transformer DiT.\\n\\nOur method readily applies without any major alterations. Per-frame multi-view attention layers are added to the model in between the 3D video attention that now operates on each of the views separately. The weights of the multiview-attention is initialized from the 3D video attention layer and zero-residual initialized.\\nWe finetune the full model end-to-end using the same dataset (WEB360). 
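For illustration, the "zero-residual initialized" per-frame multi-view attention described above can be sketched in PyTorch as follows. This is our own schematic, not the authors' implementation; the module name, the scalar gate, and the tensor layout are assumptions:

```python
import torch
import torch.nn as nn

class MultiViewAttention(nn.Module):
    """Per-frame attention across camera views, added as a zero-initialized
    residual so the pretrained base model is unchanged at initialization."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Zero-init the residual gate: the block starts as an identity map.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, views, frames, tokens, dim)
        b, v, f, t, d = x.shape
        # Attend across views (and their tokens) independently per frame.
        h = x.permute(0, 2, 1, 3, 4).reshape(b * f, v * t, d)
        out, _ = self.attn(h, h, h)
        out = out.reshape(b, f, v, t, d).permute(0, 2, 1, 3, 4)
        return x + self.alpha * out
```

At initialization the gate is zero, so inserting the layer leaves the base model's behavior intact; the gate then learns cross-view mixing during finetuning.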
Additional training details will be included in the supplementary materials.\\n\\n\\nWith no major alterations to our framework, the new model VideoPanda (CogVideoX-2B) demonstrates large improvements in OOD generalization and video quality in both the text-conditional and video-conditional settings.\\n\\n\\nWe have also addressed each reviewer individually with our responses.\\nOnce again we want to thank the reviewers for their great feedback and suggestions.\\n\\n[Yang et al, 2024] Yang, Zhuoyi et al. \\u201cCogVideoX: Text-to-Video Diffusion Models with An Expert Transformer.\\u201d ArXiv abs/2408.06072 (2024): n. pag.\"}", "{\"title\": \"Point 7 OOD Evaluation: Quantitative Metrics\", \"comment\": \"**7 OOD Evaluation: Quantitative Metrics**\\n\\nAs mentioned earlier, we have improved and expanded our OOD evaluations. For the text-conditional setting, we now use the popular VBench evaluation prompts. These prompts offer good diversity and include fantastical examples, such as \\\"pandas sitting in a cafe,\\\" to stress test OOD generalization. Qualitative visual results are shown here: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\\nTo evaluate our performance on OOD cases, we also compute and report some quantitative metrics. 
These are done for the text-conditional setting, in which we compare 360DVD against VideoPanda using VideoLDM (from the paper) as well as VideoPanda using CogVideoX-2B which we explained further in point 2 above. We used the GPT-enhanced prompts released by VBench here: https://github.com/Vchitect/VBench/tree/master/prompts/gpt_enhanced_prompts\n\n**Clip Score:** We compute a clip score between our generated videos and the input prompt to quantify video-text alignment.\n\n**FID/FVD**: Following the suggestion by reviewer TXRj, we evaluated non-paired FID and FVD for the OOD text-conditional case (meaning that the true set of videos and our generated videos do not share the same prompts as we do not have a reference ground truth set to correspond with the prompts).\nFor this we decided to use the popular video dataset HDVila for the reference set. In particular, we use 3,000 random videos from HDVila for FVD computations and use the first frames from the same set for FID computations. As this dataset contains perspective view data, we also extract perspective views from our generated panorama videos in order to perform a fair comparison.\n\n**Perspective View Extraction:** In addition to the 8 horizontal (zero degree elevation) extracted views we used in the paper, we also included views with non-zero elevation. In particular, we create an additional setting where we extract 8 views in total with 4 views at negative 60 degree elevation looking downwards and another 4 views at positive 60 degree elevation looking upwards. The FOV is kept at 90 degrees.
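The perspective view extraction described above amounts to casting pinhole-camera rays at a chosen yaw/elevation and sampling the equirectangular panorama. A minimal numpy sketch with nearest-neighbor sampling follows; this is our own illustration (up to sign conventions), not the evaluation code behind the tables:

```python
import numpy as np

def extract_view(pano, fov_deg=90.0, yaw_deg=0.0, pitch_deg=0.0, size=256):
    """Sample a pinhole-camera view from an equirectangular panorama.
    pano: (H, W, 3) equirectangular image; yaw/pitch in degrees."""
    H, W = pano.shape[:2]
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels
    # Camera-space ray directions for each output pixel (z forward).
    u, v = np.meshgrid(np.arange(size) + 0.5, np.arange(size) + 0.5)
    dirs = np.stack([u - size / 2, v - size / 2, np.full_like(u, f)], -1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate by pitch (around x) then yaw (around y).
    p, y_ = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(y_), 0, np.sin(y_)], [0, 1, 0], [-np.sin(y_), 0, np.cos(y_)]])
    d = dirs @ (Ry @ Rx).T
    # Direction -> longitude/latitude -> equirectangular pixel coordinates.
    lon = np.arctan2(d[..., 0], d[..., 2])      # in (-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))  # in [-pi/2, pi/2]
    px = ((lon / np.pi + 1) / 2 * W).astype(int) % W
    py = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[py, px]
```

With `fov_deg=90` and `pitch_deg=+/-60`, calls like this would produce views analogous to the elevated-view evaluation setting.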
We refer to this setting as \u201cElevation=+/-60degree Views\u201d in the table below.\nThese views better capture a complete picture of the panorama while still remaining within the distribution of natural camera angles. We present the results in the table below:\n\n| Text conditional Vbench All Dimensions (946 prompts x 3 seeds) | Elevation=+/-60degree 8 Views | | | Horizontal 8 Views (elevation=0) | | |\n|---|:---:|:---:|:---:|:---:|:---:|:---:|\n| | FID (to HDVila3k frames) | FVD (to HDVila3k video) | Clip Score | FID (to HDVila3k frames) | FVD (to HDVila3k video) | Clip Score |\n| 360DVD | 149.7 | 901.1 | 23.39 | 127.1 | 801.9 | 27.63 |\n| Ours (VideoLDM + web360) | **130.6** | **826.6** | **24.12** | **112.2** | **677.8** | **27.78** |\n| Ours (Cogvideo + web360) | **_109.9_** | **_675.9_** | **_25.99_** | **_97.2_** | **_624.7_** | **_29.33_** |\n\nThe original model we presented in our paper (VideoPanda using the VideoLDM base model) outperforms 360DVD across all of these metrics. \nVideoPanda significantly outperforms 360DVD in the 60 degree elevation views, highlighting its superior ability to generate the ground and sky views, which are distorted in the equirectangular representation used by 360DVD.\nAdditionally, the superior performance gained by using the CogVideoX-2B base model with VideoPanda is clearly seen by the quantitative evaluation above, with massive improvements observed across all metrics.\"}", "{\"title\": \"Reply to Reviewer cZZr Part 3\", \"comment\": \"> **Re: Question1 \u201cHow many samples are used for the evaluation in Table 2? How do you collect the prompts? Do they cover good diversity?\u201d**\n\nTable2 is our comparison to MVDiffusion on the in-distribution video-conditional setting.
Table1 is our comparison to 360DVD on the in-distribution text-conditional setting. Both of these in-distribution evaluations used the same data source.\n- The number of ground truth panorama videos used for in-distribution evaluation is 100.\n- These videos were selected from online panorama videos. To ensure that the selected videos do not overlap with WEB360, the additional videos we sourced from airpano were limited to ones uploaded after the creation date of WEB360, and we also sourced videos from another channel, \u201cNational Geographic\u201d.\n- The prompts for the clips were obtained from captioning the input view extracted from the panorama with CogVLM.\n- In terms of diversity, the clips are similar in distribution to the WEB360 data.\n- We sampled one seed per prompt for the evaluations in the paper, but for the new OOD evaluations we sample 3 seeds for each method.\n\nWe have greatly improved our OOD evaluations, including quantitative metrics and VBench prompts that cover a good diversity of different dimensions and categories.\nPlease see the comment **(7 OOD Evaluation: Quantitative Metrics)** to all reviewers for more details regarding the OOD evaluations.\n\n\n**> Re: Question2 \u201cIt is hard to understand why multi-task training is better than single task training. This is contrast with some common feeling, for example all the T2V or I2V modes are processed with fixed frames, because varied number of tokens in the attention computation may hinder the performance on a specific frames and resolution. I would like to see more elaboration over this.\u201d**\n\n(From points 5C & 5D in our response to all reviewers)\n\nWe want to clarify that in the paper the \u201cmulti-task training\u201d refers to training the same model to be capable of handling different types of conditioning (text-, video- and autoregressive). This model is more capable than a model only trained for one of the settings.
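As a sketch of what training one model on different conditioning types can look like in a data loader (illustrative only; the field names and the 4-frame context length are our assumptions, not details from the paper):

```python
import random

# Conditioning modes a single multi-task model is trained to handle.
CONDITION_TYPES = ["text_only", "single_view_video", "autoregressive"]

def sample_condition(example, rng=random):
    """Pick one conditioning mode for a training example.

    example: {"text": str, "views": list of per-view frame sequences}
    """
    mode = rng.choice(CONDITION_TYPES)
    cond = {"text": example["text"]}
    if mode == "single_view_video":
        # Condition on the full video of one (front) view.
        cond["input_view"] = example["views"][0]
    elif mode == "autoregressive":
        # Condition on the first few frames of every view as past context.
        cond["context"] = [frames[:4] for frames in example["views"]]
    return mode, cond
```

At inference the same model can then be driven by whichever condition is available, which is the flexibility the multi-task training aims for.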
Performance-wise, the multi-task model is on par with the single-task video-conditional model, albeit slightly worse, as shown in Table 3 of the paper. Please also see point 5C in our response to all reviewers for a more detailed explanation.\n\nThe use of non-fixed frames is the \u201crandom matrix\u201d strategy we present in the paper. It is a computational technique to improve the model\u2019s ability to generate full multiview videos of longer time horizons at test time for a given computational budget during training.\n\nOur random matrix strategy is explained in line 476 in the main paper. We wish to generate 8 views with 16 frames, but because we are generating more views than single-view training, this cannot fit in GPU memory and also slows down training. We could still train with 8 views and 6 frames instead, which does fit in memory. We call this strategy \u201cfixed matrix\u201d. Although simple, the mismatch in training frames (6) vs inference number of frames (16) results in blurry outputs which we show in the top row of Figure 7. We apologize for an oversight in the caption of Figure 7. It should say \u201cfixed matrix\u201d rather than \u201cfull matrix\u201d.\n\nInstead of the reduced fixed matrix training, we notice that one can still fit 3 views at 16 frames. By randomizing which 3 out of the 8 views we choose, and randomizing other combinations that can fit in memory such as 4 views at 12 frames, we can strengthen the model\u2019s ability to generalize to sampling more frames at test time. This strategy is called \u201crandom matrix\u201d and its improvement is illustrated in the bottom row of Figure 7.
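The fixed-budget subsampling described above can be expressed in a few lines. The combinations below mirror the examples in the text (8x6, 3x16 and 4x12 all cost the same 48 view-frames), but the code itself is our own sketch rather than the training implementation:

```python
import random

# (views, frames) combinations that fit the same token budget (48 view-frames).
COMBOS = [(8, 6), (3, 16), (4, 12), (6, 8)]

def sample_training_matrix(num_views=8, num_frames=16):
    """Pick a random subset of views and a contiguous frame window
    for one training step, under a fixed compute budget."""
    v, f = random.choice(COMBOS)
    views = sorted(random.sample(range(num_views), v))
    start = random.randrange(num_frames - f + 1)
    frames = list(range(start, start + f))
    return views, frames
```

Because the model sees many different (views, frames) shapes during training, it generalizes better to sampling the full 8x16 matrix at inference than a model trained on a single reduced shape.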
We quantitatively evaluated this strategy on the in-distribution video-conditional task in Table 3, where all quality metrics improve quite a bit but reconstruction PSNR is slightly lower, which could be related to slightly better global color consistency for the fixed matrix model that sees more training iterations featuring all views.\n\nNote that this technique is still applicable for extending the horizon even with a larger compute and memory budget (for example, even if we use Context Parallel to train with more frames, it doesn't make random matrix useless as it can still improve the performance when the test time temporal window is extended even longer for the same compute cost).\n\n\n> **Re: Question3 \u201cThe artifacts of full matrix shown in Figure. 7, actually also happen in the result of the proposed method (i.e. random matrix), such as in .. \"A view of a panda's face ...\u201c.\u201d**\n\nAlthough we do see that oversmoothing could happen for more OOD results, the random matrix strategy largely reduces these cases. We quantitatively evaluated this through ablating random matrix in the paper. Please see the results we present in Table 3, where the random matrix strategy is the middle row and greatly reduces the FVD and FID score compared to the last row which is not using it.\"}", "{\"summary\": \"This work presents a method for generating videos from text or single-view video data, which employs multi-view attention to enhance a video diffusion model and produce consistent multi-view content suitable for immersive panoramas. The work demonstrates performance improvements and provides code.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The ideas are very easy to understand, reliable and well written.\n\n2. The boost in quantitative metrics looks good.\n\n3. Open source code for easy reproduction by readers.\", \"weaknesses\": \"1. Poor qualitative results.
I feel that the overfitting is evident in the effect, that is, the watermarks are being generated, and on closer inspection you can see the airpano.\\n\\n2.Some of the diagrams in the paper don't have good consistency when zoomed in.\\n\\n3.The technology is low-innovative, and multi-view concerns are common in the 3D AIGC field[1].\\n\\n[1] Liu J, Huang X, Huang T, et al. A comprehensive survey on 3D content generation[J]. arXiv preprint arXiv:2402.01166, 2024.\", \"questions\": \"See weakness.\\nI will check the author's response and revise the score after combining other review comments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper received overall negative scores. The reviewers liked the concept of extending single view videos into panoramic videos. At the same time, they listed a bunch of weaknesses, such as weaker overall results, semantic issues, not convincing evaluation, lack of novel ideas. The only reviewer (EySz) who gave a somewhat positive score, provided a superficial and very short review. Hence the recommendation is to reject the manuscript.\", \"additional_comments_on_reviewer_discussion\": \"There was a very fruitful discussion between authors and reviewers. As far as AC can tell the authors did a good job in trying to address the concerns. And indeed, the scores went up during the discussion period. The authors provided more results with a strong video model. This was appreciated by reviewers. Yet, this was not sufficient, as concerns remained: novelty, comparisons, dependence on the video model and so on.\"}", "{\"comment\": \"We thank the reviewer for their insightful comments and suggestions such as computing non-paired FVD. 
We address each of the points mentioned in the review below:\n\n**> Re: Weakness1 \u201cThe model appears to be overfitted to the WEB360 dataset.\u201d**\n\n(Please also see point 1&2 in our reply to all reviewers)\n\nTo address these concerns we have greatly expanded and improved our OOD evaluations for the text-conditional setting and show many more qualitative examples now for both settings on our updated websites. We have also trained a version of VideoPanda using the more powerful base model CogVideoX-2B and have found greatly improved generalization and visual quality.\n\n\n**Video condition:** We would like to point out that in the video-conditional setting, our model demonstrates a stronger degree of generalization (beyond the WEB360 domain) as we have demonstrated with evaluation on the out of distribution videos. To further support this, we have expanded our OOD evaluation for the video-conditional setting and show many more examples on our updated website. Please check some qualitative results from our OOD video-conditional evaluation on AI-generated videos: https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html\n\nThese OOD evaluation videos were sourced from generated samples from SORA/Runway/Luma as downloaded from their respective websites. In total there are 50 videos used for the evaluation. We will add these additional details of the evaluation set in the supplementary materials.\n\n\n**Text condition:**\n\nOur text-conditional OOD evaluations are now using the full VBench suite of prompts (1700+ prompts) with great diversity and coverage of different dimensions and categories. In the text-conditional case, bad results for OOD prompts are caused in part by our base model not understanding the prompt.
We found that switching to a stronger base model (CogVideoX-2B) simply fixes it. Please check some qualitative results from our OOD text-conditional evaluation on VBench prompts: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\n\nWe have also computed quantitative metrics (Clip Score, FID and FVD to HDVila videos) for our OOD text-conditional evaluation on VBench prompts. Please see our comment to all reviewers \u201c**7 OOD Evaluation: Quantitative Metrics**\u201d.\n\n\n**> Re: Weakness1 \u201cThe test videos are directly sourced from the WEB360 dataset or selected from similar scenes on the airpano channel\u201d**\n\nWe want to explicitly state that the test videos we used for our in-distribution evaluations were **NOT** sourced from WEB360. To ensure that the selected videos do not overlap with WEB360, the additional videos we sourced from airpano were limited to ones uploaded after the creation date of WEB360, and we also sourced videos from another channel, \u201cNational Geographic\u201d. For example, the panda video you refer to on the website is from nat geo: https://www.youtube.com/watch?v=0XrH2WO1Mzs. Although they have related content type, the video is different.\n\nAs for the ice and mountains (in \"100666\"), this is from the in-distribution text-conditional evaluation. As we mentioned in the paper (line 354), the in-distribution text-cond evaluation uses the captions of the in-distribution evaluation videos.
In this case, this text prompt corresponds to a caption generated from a clip in this \\u201cNational Geographic\\u201d video: https://www.youtube.com/watch?v=XPhmpfiWEEw&pp=sAQA.\\nWe agree that for the in-distribution evaluation, there can be similarities between the collected videos and videos in WEB360 which could result in similar text prompts to ones in WEB360 in some cases and we therefore supplemented these with our OOD evaluations.\\n\\nPlease kindly refer to our OOD text-condition and OOD video input evaluations above.\", \"ood_text_cond_vbench_prompts\": \"https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\", \"ood_col_cond_ai_generated_videos\": \"https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html\", \"title\": \"Reply to Reviewer TXRj Part 1\"}", "{\"summary\": \"This paper proposes a framework for generating panoramic videos, based on a pretrained video generation model. The authors introduce multi-view generation capabilities by embedding several Multi-view Blocks. Additionally, they propose a random matrix training strategy, using videos of random frames and views to train the model, which increases the model's generalization to longer/more viewed videos while conserving computational resources. The authors trained the model using WEB360 and tested their method on 100 in-distribution and 100 out-of-distribution videos.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors successfully adapted a pretrained video generation model to the multi-view/panoramic video generation task, achieving relatively good visual results.\\n2. The proposed method supports using text or video as conditions to generate panoramic videos.\", \"weaknesses\": \"1. 
In terms of model design, the authors introduced a Multi-view Block on top of SVD (Stable Video Diffusion) to enable multi-view generation, a concept similar to MVDiffusion. However, while MVDiffusion focuses on panoramic image generation, this paper is designed for panoramic video generation, adding a temporal block (see Figure 2). Essentially, this paper can be seen as an application combining SVD and MVDiffusion, with the spatial and temporal blocks derived from SVD and the multi-view block from MVDiffusion. From this perspective, the novelty of the proposed method may be somewhat limited, so it would be beneficial for the authors to explain how their approach differs from these methods and whether they have made specific design choices to address challenges unique to this application.\\n2. Regarding the training strategy, the proposed random matrix strategy is essentially a compromise due to limited computational resources; theoretically, using more views or frames in training would yield better results. From the experimental results in Table 3, it can be seen that improvements in FID and FVD scores are achieved at the cost of PSNR (rows 2 and 3).\\nAs for the multi-task strategy proposed by the authors\\u2014randomly dropping some conditions (such as text, the first frame, or single-view video), or conditioning on only part of the information or a subset of modalities\\u2014is a common trick in training diffusion models. For example, dropping text is often used in text-to-image/video tasks [GLIDE, SVD, Imagen, etc.], or conditioning on text and image in image editing [InstructPix2Pix, Tim Brooks et al. 2022]. Therefore, it would be helpful for the authors to clarify how their approach differs from these methods to demonstrate the novelty of their method.\", \"questions\": \"1. In Table 3, after applying the multi-task training strategy, several metrics, such as FVD and PSNR, show a decline. Could the authors provide an appropriate explanation for this?\\n2. 
In section 4.6, the authors tested autoregressive long video generation, producing videos up to 61 frames. Have the authors attempted to generate even longer videos? As video length increases, does generation quality continuously degrade? If so, what feasible strategies might help mitigate this issue?\\n3. In Table 1, the authors compare their method with 360DVD and report performance improvements. However, 360DVD uses the text-to-image model SD1.5, while the proposed method uses the SVD video generation model, which inherently offers an advantage in video smoothness. Did the authors conduct a relatively fair comparison, such as applying SVD to 360DVD or using SD1.5 with the proposed method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarifying our comparison with 360DVD using fair data and model setting. Reply to: Official Comment by Reviewer cZZr\", \"comment\": \"We thank the reviewer for their quick reply and kind comments about our updated results and presentation! We very much appreciate your constructive feedback (such as suggesting non-paired FID/FVD for evaluation) which improved the quality of our rebuttal.\\n\\n\\nWe would like to clarify a few points. In our expanded OOD evaluations, we have made a fair comparison (in terms of data and base model) to 360DVD using the **original model from our paper**.\\n\\nAll of our quantitative and qualitative comparisons against 360DVD include our VideoPanda model based on **VideoLDM using the same data (WEB360)** which is a fair base model comparison. VideoLDM is based on SD1.5 and shares a very similar structure to AnimateDiff (the base model of 360DVD).\\n\\n**For quantitative comparisons:** \\n\\nWe conducted quantitative OOD text-conditional evaluation where our base model using VideoLDM strongly outperforms 360DVD in Clip Score, FID and FVD (to HDVila3k). 
We have copied the quantitative comparison here for your convenience:\\n| Text conditional Vbench All Dimensions (946 prompts x 3 seeds) | Elevation=+/-60degree 8 Views | | | Horizontal 8 Views (elevation=0) | | |\\n|-----------------------------------------------------------------|:-----------------------------:|:-----------------------:|:-----------:|:--------------------------------:|:-----------------------:|:-----------:|\\n| | FID (to HDVila3k frames) | FVD (to HDVila3k video) | Clip Score | FID (to HDVila3k frames) | FVD (to HDVila3k video) | Clip Score |\\n| 360DVD (AnimateDiff + web360) | 149.7 | 901.1 | 23.39 | 127.1 | 801.9 | 27.63 |\\n| Ours (VideoLDM + web360) | **130.6** | **826.6** | **24.12** | **112.2** | **677.8** | **27.78** |\\n\\nThese additional comparisons corroborate our findings in the paper on the in-distribution text-conditional setting (Table 1) where our method also greatly outperforms 360DVD.\\n\\n\\n**For qualitative comparisons:** (https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html )\\n\\n\\nThe left column is our VideoPanda with (VideoLDM+WEB360) and the right column is 360DVD (AnimateDiff+WEB360).\\n\\n**Multiview perspective generation VS direct equirectangular generation:** \\nOne main benefit of using multiview perspective generation compared to the equirectangular format used by 360dvd, is that we can more naturally generate content in the sky and ground views (above or below +/-60 degree elevation) that typically are highly distorted in the equirectangular format. This is very visible in the OOD text-conditional samples from 360DVD when viewing the sky or ground views using the provided VR viewer. We also showed this in the appendix contained in the supplementary material (please see Appendix Figure A5). 
\\n\\nViewing results in VR also highlights the strange motions in 360DVD including the ground morphing in a spiral direction (0) \\u201cIn a charming Parisian caf\\u00e9, a panda\\u201d\\nor inanimate objects sliding in different directions (17) \\u201cA serene scene unfolds with a delicate porcelain teacup\\u201d.\\nSuch errors are hard to visually spot from only looking at the equirectangular projection native to 360DVD but very important for the actual task of panorama video generation.\\n\\n\\n**Superior range of capability:**\\n\\nAdditionally, 360DVD only considers the text-conditional case and doesn't consider video-conditional or autoregressive generation at all. We can handle all cases with the same base model. I think this can also be seen as a superiority in the range of capability. \\n\\n\\nWe would like to reiterate that our contributions are introducing the video-conditional panorama generation. Our multi-task training strategy enables a unified model capable of flexible conditioning during inference (see point 5c) and the randomized matrix strategy improves the quality of test time generalization to more frames with less computational demands in training (see point 5d).\\n\\n\\nPlease also note that for all of the text-conditional comparison to 360DVD **we did not increase the training data**, but kept it the same (WEB360) for fair comparison. Specifically, both the left column VideoPanda with (VideoLDM+WEB360) and middle column VideoPanda with (CogVideoX-2B+WEB360) are using WEB360 dataset.\\n\\n\\nAgain we are very very thankful for your prompt and valuable feedback and please let us know if you still have concerns and we will be happy to actively address them!\"}", "{\"title\": \"Response for Rebuttal\", \"comment\": \"Thank you for the detailed response and the additional experiments provided. The use of a more powerful base model, CogVideoX, has indeed led to some improvements in the quality of panoramic video generation. 
I have carefully reviewed all reviewers' comments as well as the authors\\u2019 responses. However, the following concerns remain unresolved:\\n\\n1. Limited Novelty: The paper lacks specific innovations tailored to the task of panoramic video generation. As acknowledged in the authors\\u2019 response, the multi-view attention mechanism employed is identical to that of MVDream, without any special adaptations for the unique challenges of panoramic video generation. This limitation is reflected in the additional experiments, where noticeable multi-view stitching artifacts are still evident in VR previews, even when using the stronger CogVideoX model.\\n\\n2. Unfair Comparisons: As pointed out by reviewers U9nk and cZZr, performance improvements derived solely from adding data and switching to a more powerful base model do not constitute a fair comparison. Moreover, the authors have not updated the descriptions or clarified the methodology for FID and FVD metrics in the main text, nor have they incorporated the revised metrics into the manuscript.\\n\\n3. Response to \\\"Varying FOVs and View Directions\\\": The authors\\u2019 reply regarding adaptation to diverse input conditions is unsatisfactory. Accommodating user diversity is crucial for practical applications, especially for tasks such as generating panoramic videos from single-view inputs. Such considerations should be reflected in the methodological design rather than being deferred to future work or partially addressed merely by expanding training data.\\n\\nGiven these unresolved concerns, I maintain my initial score.\"}", "{\"title\": \"Clarifying our comparison with 360DVD using fair data and model setting. Reply to: Official Comment by Reviewer U9nk\", \"comment\": \"We thank the reviewer for their quick reply and kind comments about our updated results and presentation! We very much appreciate your constructive feedback. 
See below our reply which is also similar to the reply to Reviewer cZZr.\\n\\n\\nWe would like to clarify a few points. In our expanded OOD evaluations, we have made a fair comparison (in terms of data and base model) to 360DVD using the **original model from our paper**.\\n\\nAll of our quantitative and qualitative comparisons against 360DVD include our VideoPanda model based on **VideoLDM using the same data (WEB360)** which is a fair base model comparison. **VideoLDM is based on SD1.5** and shares a very similar structure to AnimateDiff (the base model of 360DVD).\\n\\n**For quantitative comparisons:** \\n\\nWe conducted quantitative OOD text-conditional evaluation where our base model using VideoLDM strongly outperforms 360DVD in Clip Score, FID and FVD (to HDVila3k). We have copied the quantitative comparison here for your convenience:\\n| Text conditional Vbench All Dimensions (946 prompts x 3 seeds) | Elevation=+/-60degree 8 Views | | | Horizontal 8 Views (elevation=0) | | |\\n|-----------------------------------------------------------------|:-----------------------------:|:-----------------------:|:-----------:|:--------------------------------:|:-----------------------:|:-----------:|\\n| | FID (to HDVila3k frames) | FVD (to HDVila3k video) | Clip Score | FID (to HDVila3k frames) | FVD (to HDVila3k video) | Clip Score |\\n| 360DVD (AnimateDiff + web360) | 149.7 | 901.1 | 23.39 | 127.1 | 801.9 | 27.63 |\\n| Ours (VideoLDM + web360) | **130.6** | **826.6** | **24.12** | **112.2** | **677.8** | **27.78** |\\n\\nThese additional comparisons corroborate our findings in the paper on the in-distribution text-conditional setting (Table 1) where our method also greatly outperforms 360DVD.\\n\\n\\n**For qualitative comparisons:** (https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html )\\n\\n\\nThe left column is our VideoPanda with (VideoLDM+WEB360) and the right column is 360DVD (AnimateDiff+WEB360).\\n\\n**Multiview 
perspective generation vs. direct equirectangular generation:**\\nOne main benefit of using multiview perspective generation compared to the equirectangular format used by 360DVD is that we can more naturally generate content in the sky and ground views (above or below +/-60 degree elevation) that are typically highly distorted in the equirectangular format. This is very visible in the OOD text-conditional samples from 360DVD when viewing the sky or ground views using the provided VR viewer. We also showed this in the appendix contained in the supplementary material (please see Appendix Figure A5). \\n\\nViewing results in VR also highlights the strange motions in 360DVD, including the ground morphing in a spiral direction (0) \\u201cIn a charming Parisian caf\\u00e9, a panda\\u201d\\nor inanimate objects sliding in different directions (17) \\u201cA serene scene unfolds with a delicate porcelain teacup\\u201d.\\nSuch errors are hard to visually spot from only looking at the equirectangular projection native to 360DVD but very important for the actual task of panorama video generation.\\n\\n\\n**Superior range of capability:**\\n\\nAdditionally, 360DVD only considers the text-conditional case and doesn't consider video-conditional or autoregressive generation at all. We can handle all cases with the same base model. We believe this also demonstrates a broader range of capability. \\n\\n\\nWe would like to reiterate that our contributions include introducing the video-conditional panorama generation task. Our multi-task training strategy enables a unified model capable of flexible conditioning during inference (see point 5c) and the randomized matrix strategy improves the quality of test-time generalization to more frames with less computational demands in training (see point 5d).\\n\\n\\nPlease also note that for all of the text-conditional comparisons to 360DVD **we did not increase the training data**, but kept it the same (WEB360) for fair comparison. 
Specifically, both the left column VideoPanda with (VideoLDM+WEB360) and the middle column VideoPanda with (CogVideoX-2B+WEB360) use the WEB360 dataset.\\n\\n\\nAgain, we are very thankful for your prompt and valuable feedback; please let us know if you still have concerns and we will be happy to actively address them!\"}", "{\"title\": \"Reply to Reviewer cZZr Part 2\", \"comment\": \"> **Re: Weakness2 \\u201cIn contrast, although showing over-smooth textures, the semantics of 360DVD are more natural and the scene of content are more identifiable than the proposed method.\\u201d**\\n\\n(From point 3 in our reply to all reviewers)\\n\\nWe hope our new text-conditional OOD evaluations can help address some of the concerns about how our model compares to 360DVD. We evaluate both methods head-to-head on the VBench prompts. We want to point out that the 360DVD results are almost always not proper panorama videos. Additionally, we found that the prompt alignment and scene consistency for VideoPanda can all improve simply by applying VideoPanda to a more powerful base model (CogVideoX-2B).\\n\\nOne main benefit of using multiview perspective generation compared to the equirectangular format used by 360DVD is that we can more naturally generate content in the sky and ground views (above or below +/-60 degree elevation) that are typically highly distorted in the equirectangular format. We show this in the appendix contained in the supplementary material (please see Appendix Figure A5). 
It is also very visible in the OOD text-conditional samples when viewing the sky or ground views using the provided VR viewer: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html \\n\\nSuch errors are hard to visually spot from the equirectangular projection native to 360DVD but very important for the actual task of panorama video generation.\\nPlease have a try!\\n\\nThis finding is also supported by our quantitative evaluation on the OOD prompts.\\nPlease see the comment **(7 OOD Evaluation: Quantitative Metrics)** to all reviewers for more details regarding the evaluation.\\n\\nWe present the results in the table below:\\n\\n| Text conditional Input vid: Vbench All Dimensions (946 prompts x 3 seeds) | Elevation=+/-60degree 8 Views | | | Horizontal 8 Views (elevation=0) | | |\\n|-------------------------------------------------------------------|:-----------------------------:|:------------------------:|:-----------:|:-------------------------------:|:----------------------------:|:-----------:|\\n| | FID (to MS-COCO3k) | FVD (to HDVila3k) | Clip Score | FID (to MS-COCO3k) | FVD (to HDVila3k) | Clip Score |\\n| 360DVD | 128.6 | 901.1 | 23.39 | **91.8** | 801.9 | 27.63 |\\n| Ours (VideoLDM + web360) | **115.0** | **826.6** | **24.12** | 92.6 | **677.8** | **27.78** |\\n| Ours (Cogvideo + web360) | **_93.4_** | **_675.9_** | **_25.99_** | **_74.5_** | **_624.7_** | **_29.33_** |\\n\\nNote that the original model we presented in our paper (VideoPanda using the VideoLDM base model) is clearly better than 360DVD on all metrics. 
VideoPanda significantly outperforms 360DVD on elevated views, highlighting its superior ability to generate the ground and sky views, which are distorted in the equirectangular representation used by 360DVD.\\nAdditionally, the superior performance gained by using the CogVideoX-2B base model with VideoPanda is supported by improvement across all metrics.\\n\\n**> Re: Weakness3 \\u201cThe comparative examples with 360DVD are almost static scene.\\u201d \\u201cIt is highly required to evaluate on cases with moving objects (such as \\\"moving car on the street\\\", \\\"astronaut riding a horse on grass\\\", etc.)\\u201d**\\n\\n(Please also see points 1 and 3 in our reply to all reviewers)\\n\\nWe have now included the entire VBench suite for our OOD text-conditional experiments and it contains many prompts with dynamic objects. Please see the qualitative comparisons here: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\\nWe do find that the generations from our model with the VideoLDM base model tend to have a lower amount of motion. 
However, this is related to the base model, and VideoPanda on CogVideoX-2B trained on the same data is able to generate scenes with fairly dynamic content.\\n\\nAs 360DVD conditions on optical flow, it can directly force some motions in the scene, but the motions can often be incoherent, such as objects morphing in and out of existence.\\nViewing results in VR also highlights other strange motions in 360DVD, including the ground morphing in a spiral direction (0) \\u201cIn a charming Parisian caf\\u00e9, a panda\\u201d\\nor inanimate objects sliding in different directions (17) \\u201cA serene scene unfolds with a delicate porcelain teacup\\u201d when comparing VideoPanda (left column) vs 360DVD (right column).\"}", "{\"comment\": \"**5 Motivation and Novelty**\\n\\nAs we mentioned in the paper, our main contributions are as follows:\\n\\na) Propose Video-conditional Panorama Video Generation Task\\n\\nWe are the first to introduce the video-conditional multiview generation task in the open domain. We believe the capability of a model to synthesize multiple views of a dynamic scene is key to addressing many important problems in vision, graphics and robotics. To support this goal, panoramic videos are a very promising yet underutilized source of true multiview spatiotemporal data that contains dynamic objects.\\n\\nThere are many existing tools for forming image panoramas of static scenes from image collections or videos captured by handheld cameras and phones. These collate and compose the images. However, this approach is not applicable for producing panorama videos of dynamic scenes, as the different views will be temporally inconsistent with each other. To capture panoramic videos, customized camera rigs need to be used.\\n\\nGenerating panoramas conditional on input videos is hence an important step to democratize the creation of panoramic content. 
Such content can be used for creating VR experiences and as backgrounds for visual effects and other graphics applications.\\nAnother interesting use case is scene reconstruction, where diffusion models have made recent contributions by supplying high-fidelity priors to resolve inherent ambiguities. Multiview image modeling capabilities for this use case are being enhanced by leveraging pretrained video models but are mostly still restricted to static scene cases.\\nSuch capabilities may also aid robotic applications where we may wish to play back a recorded sequence but allow the agent to modify its behavior, such as viewing direction, to some extent.\\n\\nb) Using Multiview Attention for Panorama Video Generation\\n\\nThe multi-view attention mechanism we use is not inherently different from existing ones like MVDream or CAT3D, but we introduce it to the video panorama generation setting and show we can get good results whilst keeping the multiview attention operating only per-frame and letting the rest of the layers keep interpreting each full video separately.\\n\\nc) Multi-Task Training for Flexible Conditioning\\n\\nWe introduce multi-task training to enable a single model to handle text-conditioning, video-conditioning and autoregression, which saves training time and is more convenient. While dropping conditioning during training is a common technique, as used in models like GLIDE, Imagen, and SVD, it is mostly done to enable classifier-free guidance rather than flexible conditioning for a single model.\\nThe use of joint video and text conditioning in our model is similar to other video outpainting settings and also related to the task of Instruct-Pix2Pix. 
In our case, our single model is able to handle full generation from text as well as conditioning on a video, and autoregression.\\n\\nd) Randomized Matrix Training Strategy for Longer Temporal Horizon\\n\\nWe introduced the randomized matrix training strategy, which improves the model\\u2019s ability to generate full multiview videos of longer time horizons at test time for a given computational budget during training.\\n\\nOur random matrix strategy is explained in line 476 in the main paper. We wish to generate 8 views with 16 frames, but because we are generating more views than single-view training, it cannot fit in GPU memory and also slows down training. We could still train with 8 views and 6 frames instead, which does fit in memory. We call this strategy \\u201cfixed matrix\\u201d. Although simple, the mismatch between training frames (6) and inference frames (16) results in blurry outputs, which we show in the top row of Figure 7. We apologize for an oversight in the caption of Figure 7. It should say \\u201cfixed matrix\\u201d rather than \\u201cfull matrix\\u201d. \\n\\nInstead of the reduced fixed matrix training, we notice that one can still fit 3 views at 16 frames. By randomizing which 3 out of the 8 views we choose, and randomizing other combinations that can fit in memory such as 4 views at 12 frames, we can strengthen the model\\u2019s ability to generalize to sampling more frames at test time. This strategy is called \\u201crandom matrix\\u201d and its improvement is illustrated in the bottom row of Figure 7. 
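For intuition, the random matrix sampling described above could be sketched as follows (the budget constant and function name are our own illustrative assumptions, not the paper's code):

```python
import random

# Illustrative numbers from the discussion above: the full grid is 8 views x 16
# frames, but only sub-grids of up to ~48 view-frame cells fit in GPU memory
# (e.g. 3 views x 16 frames, or 4 views x 12 frames).
TOTAL_VIEWS, TOTAL_FRAMES, MAX_CELLS = 8, 16, 48

def sample_random_matrix(rng=random):
    """Sample a random (views, frames) sub-grid that fits the memory budget."""
    candidates = [(v, f)
                  for v in range(1, TOTAL_VIEWS + 1)
                  for f in range(1, TOTAL_FRAMES + 1)
                  if v * f <= MAX_CELLS]
    n_views, n_frames = rng.choice(candidates)
    view_ids = sorted(rng.sample(range(TOTAL_VIEWS), n_views))
    frame_ids = sorted(rng.sample(range(TOTAL_FRAMES), n_frames))
    return view_ids, frame_ids
```

Each training step then only attends over the sampled sub-grid, so the model sees many different view/frame combinations for the same compute cost.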
We quantitatively evaluated this strategy on the in-distribution video-conditional task in Table 3, where all quality metrics improve quite a bit but reconstruction PSNR is slightly lower, which could be related to slightly better global color consistency for the fixed matrix model that sees more training iterations featuring all views.\\n\\nNote that this technique is still applicable for extending the horizon even with a larger compute and memory budget (for example, even if we use Context Parallel to train with more frames, random matrix is not made useless, as it can still improve the performance when the test-time temporal window is extended even longer for the same compute cost).\", \"title\": \"General response, continued: 5 Motivation and Novelty\"}", "{\"summary\": \"This paper presents a novel method for long panoramic video generation from a text prompt or a perspective video. Different from the existing 360-DVD, which generates equirectangular panorama video directly, it builds on existing video diffusion models by adding multi-view attention layers to generate consistent multi-view outputs. This is expected to help maintain the capability of the pre-trained model without the domain gap hindering it. Extensive experimental results demonstrate the superiority of the proposed method, especially the quantitative evaluation and user study. Anyhow, it still suffers from obvious visual artifacts, as shown in the demonstrated video results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper is well written and easy to follow. Every design is well-motivated and clearly explained.\", \"The key idea to formulate panorama video generation as multi-view video generation makes sense and is novel. Quantitative results (i.e. Table 1) evidence the superiority over the existing baseline method 360DVD.\", \"The experimental evaluation is well-conducted and sufficient. 
Multiple metrics are adopted and a user study is performed.\"], \"weaknesses\": \"-The major weakness is that most of the generated panorama videos are not as good as expected, which renders the importance of the key technical innovation not well supported.\\nFirst, most of the results present ambiguous semantic structure or broken scenes, such as the [autoregressive generation] showcase \\\"anime girl standing on a boat\\\", and the [video-conditioned with different prompts] showcase \\\"A view of the seashore with the sea coming up against a cliff.\\u201d This is a non-negligible weakness in performance.\\nBesides, almost all results present obvious seamlines across views.\\nIt seems the newly introduced multi-view attention does not work as expected. A possible attempt is jointly finetuning the base model with LoRA, which may help the model better adapt to the panorama video distribution.\\n\\n-As stated above, some of the results suffer from broken content and semantic ambiguity. In contrast, although showing over-smooth textures, the semantics of 360DVD are more natural and the scene content is more identifiable than that of the proposed method. This undermines the claim that the proposed method is better than the existing 360DVD.\\n\\n-The comparative examples with 360DVD are almost static scenes, which makes the evaluation less convincing. It is highly required to evaluate on cases with moving objects (such as \\\"moving car on the street\\\", \\\"astronaut riding a horse on grass\\\", etc.), because the consistency of dynamic objects is one of the major focuses of the video generation task.\", \"questions\": [\"How many samples are used for the evaluation in Table 2? How did you collect the prompts? Do they cover good diversity?\", \"It is hard to understand why multi-task training is better than single-task training. 
This contrasts with common intuition; for example, all the T2V or I2V models are processed with fixed frame counts, because a varied number of tokens in the attention computation may hinder the performance at a specific frame count and resolution. I would like to see more elaboration on this.\", \"The artifacts of the full matrix shown in Figure 7 also appear in the results of the proposed method (i.e. random matrix), such as in the [video-conditioned generation] showcase \\\"A view of a panda's face peeking from behind a tree branch amidst lush green foliage\\u201d.\\nSo it seems both settings suffer from over-blurry and structure-mixing artifacts.\", \"According to the paper, the model is finetuned with the base model layers frozen. In my understanding, the distribution of each view of the panorama still deviates from the original real-world image. Would it be helpful to the generation quality (such as the ambiguous/broken scene semantics) if the base model is tuned with LoRA?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to reviewer U9nk Part1\", \"comment\": \"We thank the reviewer for their insightful comments and suggestions.\\n\\nWe would also like to point the reviewer to the additional results and evaluations we have prepared for the rebuttal and presented in the summary response to all reviewers above, including:\\n\\n1 Expanding and Improving OOD Evaluations\\n\\n2 Results on new base model (CogVideoX-2B)\\n\\n7 OOD Evaluation: Quantitative Metrics\\n\\nUpdated qualitative results can be seen on our updated webpages below:\\n\\nOOD text-cond VBench prompts: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\\n\\nOOD col-cond AI generated videos: 
https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html\\n\\nWe address each of the points mentioned in the review below:\\n\\n> **Re: Weakness1 \\u201cEssentially, this paper can be seen as an application combining SVD and MVDiffusion, with the spatial and temporal blocks derived from SVD and the multi-view block from MVDiffusion.\\u201d \\u201cit would be beneficial for the authors to explain how their approach differs from these methods and whether they have made specific design choices to address challenges unique to this application\\u201d**\\n\\nPlease see point 5 in our response to all reviewers, in which we extensively comment on the novelty of our work and relate it to similar works. \\n\\nWe would also like to highlight our multi-task training strategy for letting the same model handle flexible conditioning (see point 5c) and the randomized matrix strategy that shows we can greatly improve the quality of test-time generalization to longer sequences with less computational demands (see point 5d).\\n\\n\\n\\n> **Re: Weakness1 \\u201cRegarding the training strategy, the proposed random matrix strategy is essentially a compromise due to limited computational resources; theoretically, using more views or frames in training would yield better results. From the experimental results in Table 3, it can be seen that improvements in FID and FVD scores are achieved at the cost of PSNR (rows 2 and 3)\\u201d**\\n\\n(Please also see point 5D in our response to all reviewers)\\n\\nWe agree with the reviewer; as we state in the paper (line 476), the random matrix strategy is a technique for handling larger time horizons under a given compute constraint. 
This improves the test-time generalization we observe, with our model being able to handle longer time horizons.\\nPSNR decreases a bit, but it is also not a perfect metric for the generative setting, where the solution space is inherently ambiguous and a mode-covering solution may obtain a better PSNR than a plausible-looking generative distribution of samples. We hypothesize, based on the qualitative comparison result in Figure 7 in the paper, that the effect of random matrix is to greatly improve the visual fidelity of the generation (bottom row), whereas fixed matrix training, when extended in horizon at test time, generates oversmooth content in the unseen views (top row). This is also consistent with the improvements we see in FID and FVD.\\n\\n\\n> **Re: Weakness2 and Question1 (Multi-task strategy) \\u201cAs for the multi-task strategy [...] it would be helpful for the authors to clarify how their approach differs from these methods to demonstrate the novelty of their method\\u201d, \\u201cIn Table 3, after applying the multi-task training strategy, several metrics, such as FVD and PSNR, show a decline. Could the authors provide an appropriate explanation for this?\\u201d**\\n\\n(Please also see point 5C in our response to all reviewers)\\n\\nFor multi-task training, the purpose is to enable the same model to flexibly handle text, image and autoregressive generation. Such a model has more capabilities than any single model and is more expedient than training a separate model for each setting. Quality-wise, the metrics in Table 3 do decline a bit but remain very similar, indicating that we can use one model for all tasks while retaining a similar level of quality on an individual task.\\n\\n\\n**>Re: Question2 \\u201cIn section 4.6, the authors tested autoregressive long video generation, producing videos up to 61 frames. Have the authors attempted to generate even longer videos? As video length increases, does generation quality continuously degrade? 
If so, what feasible strategies might help mitigate this issue?\\u201d**\\n\\nThanks for the suggestion; it is an interesting direction to consider. We are working on this result and will share it shortly!\"}", "{\"title\": \"Reply to Reviewer EySz Part1\", \"comment\": \"We thank the reviewer for their insightful comments and suggestions. We address each of the points mentioned in the review below:\\n\\n> **Re: Weakness1 \\u201cPoor qualitative results. I feel that the overfitting is evident in the effect\\u201d**\\n\\n(Please also see points 1 & 2 in our reply to all reviewers)\\n\\nTo address these concerns, we have greatly expanded and improved our OOD evaluations for the text-conditional setting and now show many more qualitative examples for both settings on our updated websites. We have also trained a version of VideoPanda using the more powerful base model CogVideoX-2B and have found greatly improved generalization and visual quality.\\n\\n**Video condition:** We would like to point out that in the video-conditional setting, our model demonstrates a stronger degree of generalization (beyond the WEB360 domain), as we have demonstrated with evaluation on the out-of-distribution videos. To further support this, we have expanded our OOD evaluation for the video-conditional setting and show many more examples on our updated website.\\n\\nPlease check some qualitative results from our OOD video-cond AI generated videos: https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html\\n\\nThese OOD evaluation videos were sourced from generated samples from SORA/Runway/Luma as downloaded from their respective websites. In total there are 50 videos used for the evaluation. 
We will add these additional details regarding the selection process of the evaluation set in the supplementary materials.\\n\\n\\n**Text condition:**\\n\\nOur text-conditional OOD evaluations now use the full VBench suite of prompts (1700+ prompts) with great diversity and coverage of different dimensions and categories. In the text-conditional case, bad results for OOD prompts are caused in part by our base model not understanding the prompt. We found that switching to a stronger base model (CogVideoX-2B) simply fixes it.\\n\\nPlease check some qualitative results from our OOD text-cond VBench prompts: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\\n\\nWe have also computed quantitative metrics (Clip Score, FID and FVD to HDVila videos) for our OOD text-conditional evaluation on VBench prompts. Please see our comment to all reviewers \\u201c**7 OOD Evaluation: Quantitative Metrics**\\u201d.\\n\\n\\n> **Re: Weakness1 \\u201cThe watermarks are being generated, and on closer inspection you can see the airpano.\\u201d**\\n\\n(Please see point 6 in our reply to all reviewers)\\n\\nIt is not necessarily the case that generating a watermark indicates that such samples can\\u2019t be different from WEB360. 
For example, quite a few of our OOD text-conditional samples contain the watermark despite being drastically different from content present in WEB360. Just to name some: \\u201c0) In a charming Parisian caf\\u00e9, a panda sits\\u2026\\u201d and \\u201c1) A joyful Corgi with a fluffy coat and expressive eyes\\u201d, which can be seen here: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\\n\\nNote that there are also some examples of our model not generating the watermark, such as \\u201c(7) A vibrant individual, dressed in a colorful outfit with a..\\u201d, the left column of \\u201c(8) A serene individual, dressed in a flowing white shirt and dark trousers\\u201d, and \\u201c(19) A focused individual, wearing a dark apron over a white shirt, stands\\u201d. The results look reasonable in these cases.\\n\\nThis could be solved by careful data filtering, or some other techniques (like the DomainAdapter of AnimateDiff [Guo et al., 2023]) to remove the watermark. But this is not the main focus of this paper.\\n\\n\\n> **Re: Weakness2 \\u201cSome of the diagrams in the paper don't have good consistency when zoomed in.\\u201d**\\n\\nWe thank the reviewer for bringing up this concern. 
Are you referring to the panorama results?\\n\\nPart of the seams between view boundaries were a result of our suboptimal stitching code for forming the panorama from the multiple views. We now adopt the feathering approach used in the MVDiffusion implementation, which smoothly blends images based on distance to the image center and removes many of the more subtle seamlines that were apparent before.\\n\\nIn some OOD cases, the model could still exhibit some less subtle inconsistencies between views, but we show that these reduce when using a stronger base model (CogVideoX-2B).\\n\\nPlease refer to our new and greatly expanded OOD evaluation video samples in both the video-conditional and text-conditional cases:\\n\\nOOD video-cond AI generated videos: https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html\\n\\nOOD text-cond VBench prompts: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\"}
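To illustrate the feathering idea mentioned in this reply, here is a minimal NumPy sketch of distance-to-center blending (our own simplified reconstruction for illustration; the actual MVDiffusion implementation may differ in details such as the exact falloff, and the function names are ours):

```python
import numpy as np

def feather_weight(h, w):
    """Blend weight that decays with distance from the image center."""
    ys = np.linspace(-1.0, 1.0, h)[:, None]
    xs = np.linspace(-1.0, 1.0, w)[None, :]
    dist = np.sqrt(ys ** 2 + xs ** 2)
    # Keep a small positive floor so corner pixels still contribute.
    return np.clip(1.0 - dist / dist.max(), 1e-3, None)

def feather_blend(layers):
    """Weighted average of overlapping layers; each layer is (image, mask)."""
    acc = np.zeros_like(layers[0][0], dtype=np.float64)
    wsum = np.zeros(layers[0][0].shape[:2], dtype=np.float64)
    for img, mask in layers:
        w = feather_weight(*img.shape[:2]) * mask
        acc += img * w[..., None]
        wsum += w
    return acc / np.clip(wsum, 1e-8, None)[..., None]
```

Because each view's weight drops toward its boundary, overlapping regions transition smoothly instead of producing a hard seam where one view's pixels abruptly replace another's.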
The results generated by the method often contain the airpano watermark, whereas the few results without watermarks on the webpage exhibit strange artifacts in other regions. This indicates a lack of generalization to real-world scenarios.\\n2. The authors state, \\\"Since the out-of-distribution condition inputs do not originate from 360 videos, we cannot compute metrics that require ground truth images, such as pairwise FVD.\\\" However, to the best of my knowledge, FID and FVD do not necessarily require paired prediction-ground truth data. FID and FVD are distribution-matching metrics and do not require one-to-one correspondence between generated and real data.\\n3. There are no ablation studies on the multi-view attention mechanism. The paper does not clearly explain the differences between the proposed multi-view attention and existing methods, such as MVDream.\\n4. The paper lacks experiments on conditions with varying fields of view (FOV) and view directions. The results only demonstrate conditioning using a 0-degree latitude image. In practical scenarios, adapting to different FOVs and viewing angles is a common requirement.\", \"questions\": \"1. Can VideoPanda generate videos conditioned on different FOVs and view directions?\\n2. What is the specific implementation of the multi-view attention, and how does it differ from existing methods?\\n3. Can the authors provide more quantitative and qualitative results without watermarks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**6 Watermark**\\n\\nGeneration of watermarks is a dataset bias. When training on WEB360 (almost all the training videos have the airpano watermark), it\\u2019s expected that the model will generate the same watermark at inference.\\n\\nIt is not necessarily the case that generating a watermark indicates that such samples can\\u2019t be different from WEB360. 
For example, quite a few of our OOD text-conditional samples contain the watermark despite being drastically different from content present in WEB360.\", \"just_to_name_some\": \"\\u201c0) In a charming Parisian caf\\u00e9, a panda sits\\u2026\\u201d and \\u201c1) A joyful Corgi with a fluffy coat and expressive eyes\\u201d that can be seen here: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\\n\\nOn our newly collected dataset used to support the handling of camera elevation variations (see point 4 above), we notice there are other types of watermarks typically appearing on the bottom views. Also, a lot of these videos are shot from a camera held by a person, mounted on their head, or mounted on a vehicle of some sort, which is often also adopted by the model. This could be solved by careful data filtering, or some other techniques (like the DomainAdapter of AnimateDiff [Guo et al., 2023]) to remove the watermark. But this is not the main focus of this paper.\", \"title\": \"General response, continued: 5 Watermark\"}", "{\"title\": \"Thanks to the author for the reply\", \"comment\": \"After seeing the updated methods and examples again, I raised my rating.\"}", "{\"title\": \"Reply to Reviewer 2\", \"comment\": \"We thank the reviewer for their insightful comments and suggestions. We address each of the points mentioned in the review below:\\n\\n\\n> **Re: Weakness1 \\u201cFirst, most of the results present ambiguous semantic structure or broken scene, such as the [autoregressive generation] showcase \\\"anime girl standing on a boat\\\", and the [video-conditioned with different prompts] showcase \\\"A view of the seashore with the sea coming up against a cliff.\\u201d**\\n\\n(Please also see points 1, 2 and 4 in our reply to all reviewers)\\n\\n**Video-conditional case**: Our model does sometimes exhibit imperfect semantic structure. 
These cases are generally heavily out-of-distribution examples, and the model performs better in more typical cases. We have added many more qualitative examples of our OOD video-conditional generation to our new website page here: https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html \\n\\nWe note that this is a data issue rather than a fundamental issue in our model. \\n\\nWe have trained the same model with more panorama video data we collected that contains more varied elevations, whereas WEB360 is heavily biased towards zero elevation. The additional data helps for videos with non-zero elevations, as can be seen when comparing the left and middle columns of (7) \\u201cA view of the coastal town, the historic stone structure with a dome, the terraced buildings, and the vast blue sea.\\u201d and (8) \\u201cA view of a rugged off-road vehicle driving.\\u201d, which show extreme examples of non-standard elevation.\\n\\nFurther, we found that using a more powerful base model, CogVideoX-2B, can greatly improve the handling of OOD content, which we show on the same website in the rightmost column: VideoPanda (CogVideoX-2B + NewData).\\nIn particular, (5) \\u201cA Japanese animated film of a young woman standing on a ship and looking back at camera\\u201d features an extreme closeup of the face. The new model based on CogVideo understands the concept of placing the scene on top of a boat better.\\n\\n**Text-conditional case**: We would also like to highlight that we have greatly expanded our text-conditional OOD evaluation and use all VBench prompts, presenting many qualitative results here: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\\n\\nDespite not being perfect, our model generally creates scenes that are proper panoramas, whereas 360DVD features heavy distortions in sky and ground views. Our model is also generally better aligned to the prompt. 
We corroborate these qualitative findings with our new quantitative evaluations on these OOD cases, where we perform much better on Clip Score and FID/FVD to HDVila videos. Please see comment **(7 OOD Evaluation: Quantitative Metrics)** to all reviewers above for specifics.\\n\\n\\n> **Re: Weakness1 \\u201cBesides, almost all results present obvious seamline across views. It seems the newly introduced multi-view attention does not work as expected.\\u201d**\\n\\nPart of the seams were a result of our suboptimal stitching code for forming the panorama from the multiple views; we adopt the feathering approach used in the MVDiffusion implementation, which smoothly blends images based on distance to the image center and removes many of the more subtle seamlines that were apparent before.\\n\\nIn some OOD cases, the model could still exhibit some less subtle inconsistencies between views, but we show that these reduce when using a stronger base model (CogVideoX-2B).\", \"please_refer_to_our_new_and_greatly_expanded_ood_evaluation_video_samples_in_both_the_video_conditional_and_text_conditional_cases\": \"\", \"ood_video_cond_ai_generated_videos\": \"https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html\", \"ood_text_cond_vbench_prompts\": \"https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\\n\\n\\n>**Re: Weakness1 and Question4 \\u201cA possible attempt is jointly finetuning the base model with LORA, which may help the model better adapt to the panorama video distribution.\\u201d \\u201cWould it be helpful to the generation quality (such as the ambiguous/broken scene semantics) if the base model is tuned with LORA?\\u201d**\\n\\nThank you for the great suggestion. We note that we are currently training our CogVideo model without any freezing. We found that the more powerful base model is better able to preserve its general knowledge even without needing freezing. 
However, it could be further improved by using a combination of freezing and LoRA as you suggested. We are running this experiment now and are excited to report on it as soon as we have ablated this. Thanks again for the suggestion!\"}", "{\"title\": \"General response, continued: Non-standard Camera Elevation Input\", \"comment\": \"**3 Comparison to 360DVD**\\n\\nWe hope our text-conditional OOD evaluations can help address some of the concerns about how our model compares to 360DVD. We evaluate both methods head-to-head on the VBench prompts. We want to point out that the 360DVD results almost always fail to generate proper panorama videos.\\n\\nOne main benefit of using multiview perspective generation compared to the equirectangular format used by 360DVD is that we can more naturally generate content in the sky and ground views (above or below +/-60 degree elevation) that typically are highly distorted in the equirectangular format. We show this in the appendix contained in the supplementary material (please see Appendix Figure A5). 
It is also very visible in the OOD text-conditional samples when viewing the sky or ground views using the provided VR viewer: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html \\n\\nViewing results this way also highlights the strange motions in 360DVD including the ground morphing in a spiral direction (0) \\u201cIn a charming Parisian caf\\u00e9, a panda\\u201d\\nor inanimate objects sliding in different directions (17) \\u201cA serene scene unfolds with a delicate porcelain teacup\\u201d when comparing VideoPanda (left column) vs 360DVD (rightmost column) here: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html.\\n\\nSuch errors are hard to visually spot from the equirectangular projection native to 360DVD but very important for the actual task of panorama video generation.\\nPlease have a try!\\n\\n\\n\\n**4 Video-conditional setting: Handling variations in camera elevation in input video**\\n\\nEven with our current model, it is able to handle camera variation to some extent due to a few natural elevation changes in the training videos.\\n\\nHowever WEB360 contains very limited elevation changes in the training dataset, usually showing smooth landscape fly-overs. 
\\nWe hence collect a new dataset of panorama videos and show that it can enhance the model's understanding of variations in elevation.\\nFurthermore, similarly to the text-conditional setting, we also trained VideoPanda using the more powerful CogVideoX-2B base model, which boosts the visual quality and enhances the handling of OOD content further.\\nWe highlight these new results in the OOD Video-conditional results here (in particular, the left column is the result from the model in our paper, the middle column is the same model but trained with the new data instead of WEB360, and the right column further switches the base model to CogVideoX-2B): https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html \\n\\nThe additional data helps the understanding of different elevations, as can be seen in, for example, (7) \\u201cA view of the coastal town, the historic stone structure with a dome, the terraced buildings, and the vast blue sea.\\u201d and (8) \\u201cA view of a rugged off-road vehicle driving.\\u201d, which show extreme examples of non-standard elevation.\"}", "{\"title\": \"Thank you for the quick reply!\", \"comment\": \"We thank the reviewer for their quick reply and for checking our updated results and evaluations!\\n\\nShould you have any other concerns or questions, please feel free to let us know and we will be happy to actively address them!\"}", "{\"title\": \"Reply to Reviewer EySz Part2\", \"comment\": \"> **Re: Weakness3 \\u201cThe technology is low-innovative, and multi-view concerns are common in the 3D AIGC field[1].\\u201d**\\n\\n(Please also see point 5 in our reply to all reviewers)\\n\\nWhile we agree that there are many works in the 3D AIGC field tackling multi-view generation, we are the first to tackle the video-conditional panorama video generation task and to apply the multiview-attention mechanisms to improve video panorama generation. 
We believe the capability of a model to synthesize multiple views of a dynamic scene is key to addressing many important problems in vision, graphics and robotics. To support this goal, panoramic videos are a very promising yet underutilized source of true multiview spatiotemporal data that contains dynamic objects.\\n\\nWe would also like to highlight our multi-task training strategy for letting the same model handle flexible conditioning (see point 5c) and the randomized matrix strategy that shows we can greatly improve the quality of test-time generalization to longer sequences with less computational demands (see point 5d).\"}", "{\"title\": \"Response to reviewer U9nk Part2\", \"comment\": \">**Re: Question3 \\u201cHowever, 360DVD uses the text-to-image model SD1.5, while the proposed method uses the SVD video generation model, which inherently offers an advantage in video smoothness. Did the authors conduct a relatively fair comparison, such as applying SVD to 360DVD or using SD1.5 with the proposed method?\\u201d**\\n\\n(Please see point 2&3 in our response to all reviewers)\\n\\nTo clarify, we did not use SVD as the base model for our experiments.\\nIn the main paper we used the VideoLDM model (not SVD), which was presented in the work Align your Latents [Blattmann et al., 2023a] as cited in our paper. This model is based on SD1.5 and is quite similar in design to AnimateDiff.\\n\\nOne main benefit of using multiview perspective generation compared to the equirectangular format used by 360DVD is that we can more naturally generate content in the sky and ground views (above or below +/-60 degree elevation) that typically are highly distorted in the equirectangular format. We show this in the appendix contained in the supplementary material (please see Appendix Figure A5). 
It is also very visible in the OOD text-conditional samples when viewing the sky or ground views using the provided VR viewer: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\"}", "{\"title\": \"Reply to Reviewer TXRj Part 2\", \"comment\": \"> **Re: Weakness1 and Question 3 \\u201cResults generated by the method often contain the airpano watermark, whereas the few results without watermarks on the webpage exhibit strange artifacts in other regions.\\u201d \\u201cCan the authors provide more quantitative and qualitative results without watermarks?\\u201d**\\n\\n(Please see point 6 in our reply to all reviewers)\\n\\nIt is not necessarily the case that generating a watermark indicates that such samples can\\u2019t be different from WEB360. For example, quite a few of our OOD text-conditional samples contain the watermark despite being drastically different from content present in WEB360.\", \"just_to_name_some\": \"\\u201c0) In a charming Parisian caf\\u00e9, a panda sits\\u2026\\u201d and \\u201c1) A joyful Corgi with a fluffy coat and expressive eyes\\u201d that can be seen here: https://mvpanovideo.github.io/VideoPanda/release_text_cond_comp/release_text_cond_comp/index.html\", \"note_that_there_are_also_some_examples_of_our_model_not_generating_the_watermark_such_as\": \"\\u201c(7) A vibrant individual, dressed in a colorful outfit with a..\\u201d and the left column of \\u201c(8) A serene individual, dressed in a flowing white shirt and dark trousers\\u201d and \\u201c(19) A focused individual, wearing a dark apron over a white shirt, stands\\u201d. The results look reasonable in these cases.\\n\\nThis could be solved by careful data filtering, or some other techniques (like the DomainAdapter of AnimateDiff [Guo et al., 2023]) to remove the watermark. 
But this is not the main focus of this paper.\\n\\n\\n> **Re: Weakness2 \\u201cFID and FVD do not necessarily require paired prediction-ground truth data\\u201d**\\n\\n(From point 7 in our reply to all reviewers)\\n\\nWe agree that FID/FVD are distributional measures and thank the reviewer for their suggestion. We now compute FID and FVD for the expanded OOD text-conditional evaluations using the VBench prompts. Since these prompts can be fantastical, there is no ground-truth video set that accompanies them; instead, we use the popular image dataset MS-COCO as the FID reference set and the video dataset HDVila as our FVD reference set.\\nPlease see the comment **(7 OOD Evaluation: Quantitative Metrics)** to all reviewers for more details regarding the evaluation.\", \"we_present_the_results_in_the_table_below\": \"| Text conditional Vbench All Dimensions (946 prompts x 3 seeds) | Elevation=+/-60degree 8 Views | | | Horizontal 8 Views (elevation=0) | | |\\n|-----------------------------------------------------------------|:-----------------------------:|:-----------------------:|:-----------:|:--------------------------------:|:-----------------------:|:-----------:|\\n| | FID (to HDVila3k frames) | FVD (to HDVila3k video) | Clip Score | FID (to HDVila3k frames) | FVD (to HDVila3k video) | Clip Score |\\n| 360DVD | 149.7 | 901.1 | 23.39 | 127.1 | 801.9 | 27.63 |\\n| Ours (VideoLDM + web360) | **130.6** | **826.6** | **24.12** | **112.2** | **677.8** | **27.78** |\\n| Ours (Cogvideo + web360) | **_109.9_** | **_675.9_** | **_25.99_** | **_97.2_** | **_624.7_** | **_29.33_** |\\n\\nNote that the original model we presented in our paper (VideoPanda using the VideoLDM base model) is better than 360DVD on all metrics.\\nVideoPanda significantly outperforms 360DVD in the 60-degree elevation views, highlighting its superior ability to generate the ground and sky views, which are distorted in the equirectangular representation used by 360DVD.\\nAdditionally, the superior performance 
gained by using the CogVideoX-2B base model with VideoPanda is clearly seen in the quantitative evaluation above, with massive improvements observed across all metrics.\\n\\n\\n> **Re: Weakness3 and Question3 \\u201cThere are no ablation studies on the multi-view attention mechanism. The paper does not clearly explain the differences between the proposed multi-view attention and existing methods, such as MVDream.\\u201d \\u201cThere are no ablation studies on the multi-view attention mechanism\\u201d**\\n\\n\\n(Please also see point 5B in our reply to all reviewers)\\n\\nWe describe our multi-view attention mechanism in Section 3.1, starting at line 190. There is no inherent difference from the multi-view attention mechanisms used by other works such as Cat3D [Gao* et al., 2024] and MVDream [Shi et al., 2023b]. In the paper, we do not claim the multi-view attention mechanism as a novel contribution. Instead, we are the first to utilize it for the generation of panoramic videos and the natural handling of input video conditioning.\\n\\nPlease let us know if you would like us to try any specific ablations. Thanks!\"}", "{\"comment\": \"Thank you to the authors for their detailed response. The newly added experiments, including the use of a stronger video generation model (CogVideoX) and training with more data, have indeed improved the quality of panoramic video generation. However, my two primary concerns remain unresolved:\\n\\n1. **Method Novelty**: The authors acknowledge that their multi-view attention is essentially the same as that in MVDream, arguing that applying it to panoramic video generation constitutes novelty. However, in my view, generating panoramic videos is very similar to multi-view video generation. A single frame of a panoramic video is essentially a composite of images from multiple views, making it fundamentally a multi-view image.\\n\\n2. 
**Fair Comparisons**: As Reviewer cZZr also pointed out, the authors use more powerful video generation models like VideoLDM/CogVideoX as their base, whereas the comparison method (360DVD) uses the image-generation model SD1.5. This inherently disadvantages 360DVD in terms of temporal consistency and motion smoothness. As such, the comparison is relatively unfair, making it difficult to determine whether the performance gains are due to the proposed method itself or the superior capability of the base generation model.\"}", "{\"title\": \"Reply to Reviewer TXRj Part 3\", \"comment\": \"> **Re: Weakness4 and Question1 \\u201cThe paper lacks experiments on conditions with varying fields of view (FOV) and view directions.\\u201d\\u201cCan VideoPanda generate videos conditioned on different FOVs and view directions?\\u201d**\\n\\n(From point 4 in our reply to all reviewers)\\n\\nWe agree that handling varying FOV and view elevations is an interesting and important problem for wider practical application. \\nEven with our current model, it is able to handle camera variation to some extent due to natural elevation changes in the training videos. Although WEB360 contains very limited elevation changes, the new dataset we collect contains more and we highlight examples of these cases in the video-conditional setting here: https://mvpanovideo.github.io/VideoPanda/release_col_cond_comp/release_col_cond_comp/index.html \\n\\nThe additional data helps the understanding of different elevations as can be seen in the middle column for example (7) \\u201cA view of the coastal town, the historic stone structure with a dome, the terraced buildings, and the vast blue sea.\\u201d and (8) \\u201cA view of a rugged off-road vehicle driving.\\u201d show extreme examples of non-standard elevation. 
Using CogVideoX-2B as the base model also further boosts the visual quality and the handling of OOD content.\\n\\nThat being said, handling of other FOVs is approximate, and extreme elevation changes can also be confusing to our model.\\nWe also commented on this limitation in the conclusion of the main paper, line 529: To handle such extreme camera variations, we could combine our method with CamFreeDiff [Yuan et al., 2024], which has been applied in the image panorama generation setting. We leave this important exploration to future work.\"}" ] }
CMMpcs9prj
Towards Faster Decentralized Stochastic Optimization with Communication Compression
[ "Rustem Islamov", "Yuan Gao", "Sebastian U Stich" ]
Communication efficiency has garnered significant attention as it is considered the main bottleneck for large-scale decentralized Machine Learning applications in distributed and federated settings. In this regime, clients are restricted to transmitting small amounts of compressed information to their neighbors over a communication graph. Numerous endeavors have been made to address this challenging problem by developing algorithms with compressed communication for decentralized non-convex optimization problems. Despite considerable efforts, current theoretical understandings of the problem are still very limited, and existing algorithms all suffer from various limitations. In particular, these algorithms typically rely on strong, and often infeasible assumptions such as bounded data heterogeneity or require large batch access while failing to achieve linear speedup with the number of clients. In this paper, we introduce MoTEF, a novel approach that integrates communication compression with $\textbf{Mo}$mentum $\textbf{T}$racking and $\textbf{E}$rror $\textbf{F}$eedback. MoTEF is the first algorithm to achieve an asymptotic rate matching that of distributed SGD under arbitrary data heterogeneity, hence resolving a long-standing theoretical obstacle in decentralized optimization with compressed communication. We provide numerical experiments to validate our theoretical findings and confirm the practical superiority of MoTEF.
[ "Optimization", "Decentralized Learning", "Federated Learning", "Communication Compression" ]
Accept (Poster)
https://openreview.net/pdf?id=CMMpcs9prj
https://openreview.net/forum?id=CMMpcs9prj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zrS0Vdyg3m", "zViUk0BQpX", "tT2sz3Csc3", "rD24gE8fGO", "p2zHkGkCUJ", "nAeX7wpO1y", "mHvVqDqfzJ", "jcYvRmJ9TF", "iziii1kOzY", "ipKPC8M5gc", "icQuoeA9sf", "YEawXzs66Z", "Y1rRwGhkLe", "Xvl26jVaJk", "V0LYAPDifB", "UIb08DMVUh", "JgzPAzmSWQ", "IJFZOJateM", "HpXIecGb7n", "HVO02XjcR9", "Grc5ii4y5R", "GWPn6yzpdy", "DWySD7LA8o", "CyXBGH5ulc", "C8i7shrMvn", "83fzbv6a6G", "7zNwViZEf0", "5a2sThLQhk", "2P2pgbUww8" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732643275493, 1732052588915, 1733134784689, 1730031745413, 1730262155288, 1732051754481, 1732051136524, 1732311271596, 1732311248523, 1732453844031, 1732386708144, 1730703898445, 1733803439049, 1732603195362, 1732329470060, 1732051987568, 1732311231266, 1737524273314, 1732051300720, 1732617598471, 1730479965708, 1732574206882, 1733302349169, 1730216466822, 1732181658299, 1732051638298, 1732052189070, 1732311218167, 1732436943597 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_rAXz" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_TEjT" ], [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_SzmB" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13638/Reviewer_SzmB" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_rAXz" ], [ "ICLR.cc/2025/Conference/Submission13638/Area_Chair_QDAc" ], [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_DwbG" ], [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_SzmB" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_DwbG" ], [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_DZJC" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_DZJC" ], [ "ICLR.cc/2025/Conference/Submission13638/Reviewer_TEjT" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ], [ "ICLR.cc/2025/Conference/Submission13638/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I apologize for the delayed response and appreciate your clarification to my concerns. The explanation of the communication complexity and the construction of the Lyapunov function is clear enough. However, I think in optimization area (rather than \\\"learning theory\\\"), it is necessary to evaluate the method in real-life practical tasks, even if the focus is on theoretical analysis. The application could be lightweight like [1], but only \\\"toy model\\\" is not enough.\\n\\n[1] Liu H, Li Z, Hall D, et al. Sophia: A scalable stochastic second-order optimizer for language model pre-training[J]. arXiv preprint arXiv:2305.14342, 2023.\"}", "{\"title\": \"Rebuttals\", \"comment\": \"**W1:** This is a good question! 
We indeed do not have access to the averaged model $\\bar{\\mathbf{x}}^t$; however, there are several reasons to consider the expected gradient norm at the averaged iterate. First, we highlight that using the average iterate $\\bar{\\mathbf{x}}^t = \\frac{1}{n}\\sum_{i=1}^n \\mathbf{x}_{i}^t$ in the convergence metric is a standard technique and was done in many previous works [1-4]. Second, from practical considerations, to obtain an averaged model after the end of MoTEF, we can perform a gossip averaging with compression (e.g., Choco-Gossip [2]) that converges linearly, i.e., it adds only logarithmic terms to the convergence rate. Therefore, obtaining the average can be done in practice without hurting the rate. Next, in the revised version of the paper, we strengthened our convergence guarantees of MoTEF. In Section B.1.1, we provide detailed derivations that show that the consensus error term $\\Omega_3^t = \\mathbb{E}\\left[\\\\|\\mathbf{X}^t -\\bar{\\mathbf{x}}^t \\mathbf{1}^\\top\\\\|^2\\right]$ converges to zero as well. Together with the convergence of $\\mathbb{E}[\\\\|\\nabla f(\\mathbf{x}_{\\rm out})\\\\|^2]$, we ensure that all local models $\\mathbf{x}_i^t$ also converge to stationarity. We refer to Corollary A.6 in [5] for derivations of this claim. We also add the derivations in Section B.1.2.\\n\\n**W2:** We thank the reviewer for this comment. We adjusted the references accordingly. \\n\\n**Q1:** We do not obtain the monotonic decrease of the Lyapunov function. It is only possible if the workers use full gradients, i.e. the variance $\\sigma^2=0.$ In the stochastic regime, the decrease of the Lyapunov function is done up to the error term which scales with $\\sigma.$ Therefore, the current analysis does not automatically imply the convergence of each of the terms in the Lyapunov function. 
Nonetheless, in the response to W1 we demonstrate that the consensus error term $\\Omega_3^t$ converges to zero.\\n\\n[1] Koloskova, Anastasia and Lin, Tao and Stich, Sebastian U and Jaggi, Martin, Decentralized deep learning with arbitrary communication compression, ICLR, 2020.\\n\\n[2] Koloskova, Anastasia and Stich, Sebastian and Jaggi, Martin, Decentralized stochastic optimization and gossip algorithms with compressed communication, ICML, 2019.\\n\\n[3] Zhao, Haoyu and Li, Boyue and Li, Zhize and Richt\\u00e1rik, Peter and Chi, Yuejie, BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression, NeurIPS, 2022.\\n\\n[4] Huang, Kun and Pu, Shi, CEDAS: A compressed decentralized stochastic gradient method with improved convergence, IEEE Transactions on Automatic Control, 2024.\\n\\n[5] Koloskova, Anastasia and Lin, Tao and Stich, Sebastian U and Jaggi, Martin, Decentralized deep learning with arbitrary communication compression, arXiv preprint arXiv:1907.09356, 2019.\"}
We believe these theoretical insights represent a meaningful contribution to the field, independent of the scale of experiments, and we are committed to exploring their practical implications further in future work.\"}", "{\"summary\": \"The authors propose new algorithms for decentralized nonconvex optimization with heterogeneous functions, communication compression, and calls to stochastic gradients.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"As far as I know, the state of the art as summarized in the paper and Appendix A is correctly presented. The contributions are important, as nonconvex decentralized optimization is a timely topic with a wide range of applications.\", \"weaknesses\": \"My main concern is the following. In Table 1, it is stated that convergence is established with respect to E[||nabla.f(x_out)||] for an appropriately chosen x_out, which as its name suggest should be constructed and output by the algorithm. However, the main result, Theorem 1, is established for x_out = bar{x}_t for a random t. The problem is that bar{x} is the average of the local variables x_i, which is not available! So you only prove one half of a valid convergence statement. The second half is that the method achieves a consensus, which in your case corresponds to Omega_3 converging to zero. Reasoning on bar{x} violates the conditions of decentralized optimization, where communication is assumed to be possible only through the network edges, and with compression.\\n\\nIs x_out = bar{x}_t used in the experiments? In that case this is clearly unfair to the other methods which do not use this unaccessible oracle.\", \"minor_comments_on_the_state_of_the_art\": [\"The paper about LEAD by Liu et al. 
\\\"Linear convergent decentralized optimization with compression\\\" has been published at ICLR 2021.\", \"The title \\\"Randcom: Random communication skipping method for decentralized stochastic optimization\\\" of the paper arXiv:2310.07983 has changed\"], \"questions\": \"Does it follow from Lemma 1 that in the conditions of Theorem 1, Phi^{t+1} <= Phi^t? This would imply that all quantities in (8) remain bounded (ideally, they would be proved to tend to zero).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes MoTEF which achieves faster asymptotic convergence rate on decentralized optimization with communication compression, without using strong assumptions such as bounded gradient, bounded heterogeneity or unbiased compression. A variance-reduction version called MoTEF-VR is also introduced. Ablation studies show that MoTEF enjoys linear speed-up and is robust to network topology. Numerical experiments show that MoTEF performs better than Choco-SGD and BEER.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This work achieves the fastest asymptotic convergence rates with weakest assumptions.\\n2. The presentation is neat and clear.\", \"weaknesses\": \"1. The improvement on theoretical convergence result is not significant. Compared to CEDAS, it seems that the only improvement is removing the need for an additional unbiased compressor. To better illustrate this improvement, it is expected to validate whether using contractive compressors are more efficient than using unbiased ones. Otherwise, maybe the authors can compare the full convergence complexity (instead of the asymptotic one only) to address the theoretical improvement.\\n2. The numerical experiments are not persuasive enough. 
The compared baselines are Choco-SGD and BEER, which date from 2022 or earlier, and their convergence rate is clearly worse than SOTA as illustrated in Table 1. In contrast, CEDAS, whose convergence rate seems closer to SOTA, is not compared. Maybe the authors can make the experimental results more solid by adding more baselines like CEDAS and DeepSqueeze.\", \"questions\": \"1. Can the authors better illustrate the advantage of MoTEF against CEDAS both theoretically and empirically? For example, in what sense is contractive compression better than unbiased compression, and can MoTEF perform better than CEDAS?\\n2. The result for the CNN seems to be missing. Please make sure to include both the results and the implementation details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal (part 2)\", \"comment\": \"**Q5:** In the experiments with the MLP model, we use the MNIST dataset. Moreover, we point out that we provide the experiments with a CNN model on the MNIST dataset in the appendix. The main contribution of our work is the novel MoTEF and MoTEF-VR algorithms with their convergence analysis. We provide convergence guarantees in general non-convex and PL regimes. We support our theoretical findings in training logistic regression with non-convex regularization as well as training of MLP and CNN (in the appendix) models. In all the cases, the empirical results demonstrate the superiority of the MoTEF algorithm. Moreover, we provide the experiments that showcase the robustness of the MoTEF algorithm to the changes of the network topology. For a more comprehensive experimental study with more complex models and real-life decentralized training hardware, we defer it to a more experiment-focused work in the future due to resource constraints.\\n\\n[1] Richt\\u00e1rik et al., EF21: A new, simpler, theoretically better, and practically faster error feedback, NeurIPS, 2021. 
\\n\\n[2] Koloskova et al., A unified theory of decentralized sgd with changing topology and local updates, ICLR, 2020. \\n\\n[3] Fatkhullin et al., Momentum provably improves error feedback!, NeurIPS, 2024.\\n\\n\\n[4] Y. Takezawa et al., Momentum tracking: momentum acceleration for decentralized deep learning on heterogeneous data, TMLR, 2023.\\n\\n[5] Koloskova et al., Decentralized stochastic optimization and gossip algorithms with compressed communication, ICML, 2019.\\n\\n[6] Beznosikov et al., On biased compression for distributed learning, JMLR, 2023.\\n\\n[7] Koloskova et al., An improved analysis of gradient tracking for decentralized machine learning, NeurIPS, 2021.\\n\\n[8] Di et al., Double Stochasticity Gazes Faster: Snap-Shot Decentralized Stochastic Gradient Tracking Methods, ICML, 2022.\\n\\n[9] Seide et al., 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs, Interspeech, 2014. \\n\\n[10] Yau \\\\& Wai. Docom: Compressed decentralized optimization with near-optimal sample complexity. arXiv preprint arXiv:2202.00255, 2022.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We thank the reviewers very much for their dedication to the review process and for taking the time to carefully study our manuscript. We provide detailed responses to all raised concerns. Moreover, we made several changes to the paper and highlighted them in blue color. In particular, $(i)$ we simplified the convergence rates of the MoTEF and MoTEF-VR algorithms by removing non-dominant terms; see (10, 11, 13); $(ii)$ we fixed Section D.5, which contains experiments on training a CNN model on the MNIST dataset. Now we provide the correct plots of convergence; $(iii)$ we added a comparison against CEDAS in Section D.6; $(iv)$ we tightened the analysis of the MoTEF algorithm. 
In Section B.1.1 we provide a more accurate analysis that demonstrates that the consensus error $\\\\Omega_3^t = \\\\mathbb{E}[\\\\\\\\|\\\\mathbf{X}^t - \\\\bar{\\\\mathbf{x}}^t\\\\mathbf{1}^\\\\top\\\\\\\\|\\\\_{\\\\mathrm{F}}^2]$ converges to zero. This implies that the local models $\\\\\\\\{\\\\mathbf{x}\\\\_i^t\\\\\\\\}_{i=1}^n$ also converge; see the derivations in Section B.1.2.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer,\\n\\nWe would like to remind you that the discussion period ends soon. Therefore, we would like to know if there are any other concerns left unaddressed or that should be clarified further. We would be happy to provide any further details to answer them. Thank you!\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer,\\n\\nWe would like to remind you that the discussion period ends soon. Therefore, we would like to know if there are any other concerns left unaddressed or that should be clarified further. We would be happy to provide any further details to answer them. Thank you!\"}", "{\"title\": \"Thank you for the rebuttal\", \"comment\": \"Thanks for the detailed response.\\n1. For point 1, I consider that the bits for Rand-K should be $32k+\\\\log_2\\\\binom{d}{k}$, with $32k$ bits representing the $k$ entries and $\\\\log_2\\\\binom{d}{k}$ bits representing the selected $k$ indices. Anyway, the order $\\\\Theta(k\\\\log d)$ remains the same, which is larger than the lower bound by a factor of $\\\\Theta(\\\\log d)$. The authors are right that Rand-K cannot reach the limit; I had previously overlooked the logarithm term out of habit.\\n2. Thank you for presenting fairer comparisons with CEDAS, where MoTEF performs slightly better than CEDAS in two of them and performs similarly in the others. \\n\\nI have no further questions, and I'll raise my score to 6. Good luck!\"}", "{\"title\": \"Response to the reviewer\", \"comment\": \"1. Thank you for your response. 
We would like to address the remaining concerns.\\nThe lower bound (also called the uncertainty principle for communication compression) presented in [1] shows that a compression scheme with a distortion $\\\\alpha$ and $b$ encoding bits (in the worst case) satisfies\\n$$b \\\\approx \\\\frac{d}{2}\\\\log\\\\frac{1}{\\\\alpha},$$\\nwhere we ignore $\\\\mathcal{O}(\\\\log d)$ terms. [1] shows that there is an example of a biased compressor which matches the lower bound. For the unbiased $Q$ compressor with parameter $\\\\omega > 0$ (for example, for Rand-k $\\\\omega = \\\\frac{d}{k}-1$) we have that $\\\\frac{1}{1+\\\\omega}Q$ is the biased compressor with parameter $\\\\alpha=\\\\frac{\\\\omega}{1+\\\\omega}$. Plugging this number into the lower bound, we get that for the unbiased compressor it matches the lower bound if\\n$$b \\\\approx \\\\frac{d}{2}\\\\log(1+1/\\\\omega).$$\\nParticularly, in the case of the Rand-k compressor we have $\\\\omega=\\\\frac{d}{k}-1$, and therefore we should have\\n$$b \\\\approx \\\\frac{d}{2}\\\\log(d/(d-k)).$$\\nHowever, for the Rand-k compressor we have $b = 32k \\\\log d$, which is larger than $\\\\frac{d}{2}\\\\log(d/(d-k))$ if $k$ is sufficiently smaller than $d$. Therefore, the Rand-k compressor does not satisfy the lower bound. This is also illustrated in [10], Figure 1, where the authors demonstrate that biased compressors are closer to the lower bound than unbiased ones.\\nMoreover, we also refer to Lemma 20 in [2], where the authors analyze the difference in the compression error of the Top-k and Rand-k compressors. In particular, they demonstrate that if the entries of the input follow the standard exponential distribution, then the Top-K compressor's error is much smaller than that of Rand-k. In Figure 1 they also demonstrate that the Top-k compressor on average requires fewer bits to encode one entry than Rand-k with the same normalized variance. 
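To make this bit accounting concrete, here is a small numeric sanity check (an illustrative sketch: the 32k + log2 C(d, k) cost for Rand-k assumes 32-bit float entries plus index-set encoding, and the values of d and k are arbitrary):

```python
import math

d, k = 10**6, 10**3

# Bits needed to match the lower bound for an unbiased compressor with
# omega = d/k - 1, i.e. b ~ (d/2) * log2(d / (d - k)) as derived above.
lower_bound = (d / 2) * math.log2(d / (d - k))

# One standard accounting for Rand-k: 32 bits per transmitted float entry
# plus log2(C(d, k)) bits to encode which k of the d indices were kept.
rand_k_bits = 32 * k + math.log2(math.comb(d, k))

print(f"lower bound ~ {lower_bound:.0f} bits, Rand-k ~ {rand_k_bits:.0f} bits, "
      f"gap ~ {rand_k_bits / lower_bound:.0f}x")
```

For k much smaller than d, the gap is roughly a log d factor, consistent with the claim above that Rand-k does not reach the lower bound.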
Finally, Figure 2 in [2] showcases that the Top-k compressor saves \\\"more information\\\" about the input than Rand-k for practical gradient distributions.\\n\\n These observations demonstrate the superiority of biased compressors both theoretically and practically. \\n\\n [1] Albasyoni, Alyazeed and Safaryan, Mher and Condat, Laurent and Richt\\u00e1rik, Peter, Optimal gradient compression for distributed and federated learning, arXiv preprint arXiv:2010.03246, 2020\\n\\n [2] Beznosikov, Aleksandr and Horv\\u00e1th, Samuel and Richt\\u00e1rik, Peter and Safaryan, Mher, On biased compression for distributed learning, Journal of Machine Learning Research, 2023\\n\\n [10] Safaryan, Mher and Shulgin, Egor and Richt\\u00e1rik, Peter, Uncertainty principle for communication compression in distributed and federated learning and the search for an optimal compressor, A Journal of the IMA, 2022.\\n\\n2. We will provide further experiments soon with a larger grid search in terms of gradient norm vs transmitted bits.\"}", "{\"summary\": \"This paper proposes a novel approach, MoTEF, to achieve an asymptotic rate matching that of distributed SGD under arbitrary data heterogeneity by adding momentum tracking and error feedback techniques, solving a theoretical obstacle in decentralized optimization with compression. This paper conducts numerical experiments to illustrate the effectiveness of MoTEF.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. MoTEF achieves the convergence rate matching distributed SGD without strong assumptions, such as bounded gradient or global heterogeneity bound. It is an important improvement in distributed optimization with compression.\\n2. MoTEF supports arbitrary contractive compressors (variance-bounded estimate) without unbiasedness.\\n3. Extending MoTEF to the stochastic setting can achieve an improved rate with variance reduction.\\n4. 
This paper provides a theoretical analysis under the PL condition.\", \"weaknesses\": \"1. The comparison needs to be clearer and more detailed. Especially, the total communication complexity is important in optimization with compression. Most compression algorithms can only reduce the communication overhead of a single iteration, but cannot reduce the total communication overhead required for convergence. It is necessary to discuss it in detail.\\n2. Though the numerical experiments are enough to illustrate the effectiveness of MoTEF, more evidence on practical problems is necessary. For example, lightweight training on transformers instead of only MLP.\", \"questions\": \"1. Though the proof is clear enough, I am interested in the insight behind the construction of the Lyapunov function. Adding an overview of the technique before the theoretical results would be better.\\n2. It is a valuable study. If the authors address my concerns, I would like to improve my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposed a decentralized stochastic optimization algorithm with communication compression, momentum tracking, and error feedback. While the algorithmic ingredients are not new per se, the assembly and analysis that leads to a state-of-the-art convergence rate for an important problem is worth acceptance.\", \"additional_comments_on_reviewer_discussion\": \"While a reviewer has suggested including larger-scale experiments, the AC finds the current experiments adequate for an optimization-focused paper. Improving the network dependency, as raised by a reviewer, will be an area of interest for future research.\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Thank you for your thoughtful response. While some of your points have addressed my concerns, the paper's worse dependency on network topology remains an issue. 
I believe my current rating accurately reflects the paper's value.\"}", "{\"title\": \"Further concerns\", \"comment\": \"Thanks for the response and the additional experiments on CEDAS. There remain several concerns to be addressed.\\n\\n1. While the theoretical improvement of MoTEF is clear (it ensures convergence with contractive compressors while CEDAS only ensures convergence with unbiased compressors), the reason why contractive compression is better than unbiased compression is not quite clear. Although [1] presents the near-optimal results of contractive compressors, I believe unbiased compressors can also reach this limit. Specifically, I believe the limit in [1] is reachable by rand-K compressors, which have unbiased cousins.\\n\\n2. I have some suggestions for the additional experiments. First, it seems that in three out of four experiments, CEDAS cannot converge precisely, which seems to contradict its convergence proofs. It's possible that hyperparameters are not chosen properly. Second, it is suggested to compare the communicated bits rather than the number of iterations between the two algorithms, which is more consistent with the paper, and makes fairer comparisons. It appears that CEDAS only communicates once per iteration and MoTEF communicates twice. If so, comparing them by the number of iterations is unfair.\"}", "{\"title\": \"Rebuttals\", \"comment\": \"**W1:** First, MoTEF provably converges with contractive compressors (e.g., Top-K) while CEDAS converges only with unbiased compressors (e.g., Random-K). The class of contractive compression operators is more general and contains operators such as Top-K that do not have the unbiasedness property. Many earlier works demonstrated that contractive compressors are superior in practice [3-5], are known to be near-optimal in theory [1], and achieve smaller variance both theoretically and empirically than their unbiased cousins [2]. 
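As a toy numerical illustration of this variance gap (an illustrative sketch with arbitrary dimensions, not the paper's experimental code):

```python
import random

random.seed(0)
d, k = 100, 10

def top_k(x):
    # Biased, contractive: keep the k largest-magnitude entries, zero the rest.
    keep = set(sorted(range(d), key=lambda i: abs(x[i]), reverse=True)[:k])
    return [x[i] if i in keep else 0.0 for i in range(d)]

def rand_k(x):
    # Unbiased: keep k random entries, rescaled by d/k so that E[C(x)] = x.
    keep = set(random.sample(range(d), k))
    return [(d / k) * x[i] if i in keep else 0.0 for i in range(d)]

def sq_err(c, x):
    return sum((ci - xi) ** 2 for ci, xi in zip(c, x))

x = [random.gauss(0, 1) for _ in range(d)]
norm2 = sum(v * v for v in x)

err_top = sq_err(top_k(x), x)
err_rand = sum(sq_err(rand_k(x), x) for _ in range(2000)) / 2000
bound = (1 - k / d) * norm2  # contractive bound with alpha = k/d

print(f"Top-k error / ||x||^2  : {err_top / norm2:.2f} (bound {1 - k / d:.2f})")
print(f"Rand-k error / ||x||^2 : {err_rand / norm2:.2f} (theory {d / k - 1:.0f})")
```

Top-k's error always stays below the contractive bound (1 - k/d)||x||^2, while the unbiased, rescaled Rand-k has expected error (d/k - 1)||x||^2, an order of magnitude larger in this setup.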
Moreover, combining an unbiased compressor with a contractive one improves practical performance [6]. Second, MoTEF uses a momentum mechanism, which is known to accelerate performance both theoretically [7] and practically [8-9] in training deep models.\\n\\n**W2:** We provide the empirical comparison of the MoTEF and CEDAS algorithms on logistic regression with non-convex regularization in Section D.6. We demonstrate that MoTEF achieves a smaller gradient norm in most of the cases, which showcases its practical superiority as well.\\n\\n**Q1:** We refer to the responses to **W1** and **W2**.\\n\\n**Q2:** We thank the reviewer for pointing out the typo. In the revised version of the paper, Figure 6 contains the empirical results of training a CNN model on the MNIST dataset, and Section D.5 describes the training details and results.\\n\\n[1] Albasyoni, Alyazeed and Safaryan, Mher and Condat, Laurent and Richt\\u00e1rik, Peter, Optimal gradient compression for distributed and federated learning, arXiv preprint arXiv:2010.03246, 2020\\n\\n[2] Beznosikov, Aleksandr and Horv\\u00e1th, Samuel and Richt\\u00e1rik, Peter and Safaryan, Mher, On biased compression for distributed learning, Journal of Machine Learning Research, 2023\\n\\n[3] Lin, Yujun and Han, Song and Mao, Huizi and Wang, Yu and Dally, William J, Deep gradient compression: Reducing the communication bandwidth for distributed training, arXiv preprint arXiv:1712.01887, 2017.\\n\\n[4] Haobo Sun and Yingxia Shao and Jiawei Jiang and Bin Cui and Kai Lei and Yu Xu and Jiang Wang, Sparse Gradient Compression for Distributed SGD, International Conference on Database Systems for Advanced Applications, 2019.\\n\\n[5] Vogels, Thijs and Karimireddy, Sai Praneeth and Jaggi, Martin, PowerSGD: Practical low-rank gradient compression for distributed optimization, NeurIPS, 2019.\\n\\n[6] Horv\\u00e1th, Samuel and Richt\\u00e1rik, Peter, A better alternative to error feedback for communication-efficient distributed learning, ICLR, 
2021.\\n\\n[7] Cutkosky, Ashok and Mehta, Harsh, Momentum improves normalized sgd, ICML, 2020.\\n\\n[8] Choi, Dami and Shallue, Christopher J and Nado, Zachary and Lee, Jaehoon and Maddison, Chris J and Dahl, George E, On empirical comparisons of optimizers for deep learning, arXiv preprint arXiv:1910.05446, 2019.\\n\\n[9] Fu, Jingwen and Wang, Bohan and Zhang, Huishuai and Zhang, Zhizheng and Chen, Wei and Zheng, Nanning, When and why momentum accelerates sgd: an empirical study, arXiv preprint arXiv:2306.09000, 2023.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer,\\n\\nWe would like to remind you that the discussion period ends soon. Therefore, we would like to know if there are any other concerns left unaddressed or that should be clarified further. We would be happy to provide any further details to answer them. Thank you!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Rebuttal\", \"comment\": \"**W1:** We thank the reviewer for this valuable comment. In this work, we consider the general contractive compression with the parameter $\\\\alpha$, which quantifies the compression quality instead of the compression ratio directly. This definition covers many popular compressors in practice, for which we can discuss the total communication complexity. Below we provide a comparison of the total communication complexity based on the Top-$K$ compressor, which is a contractive compressor with $\\\\alpha =K/d$. In each round of communication, a client sends data of size proportional to $K$ (up to $\\\\log(K)$ terms) to its neighbor, instead of the dimension $d$. Therefore, the total communication complexity of the algorithm with the Top-$K$ compressor is proportional to $K \\\\times \\\\text{ number of iterations }$,\\nwhile for the non-compressed methods, the total communication complexity is $d \\\\times \\\\text{ number of iterations }$. 
Now, plugging in our complexity bound from the paper, we have that the total communication complexity of MoTEF with Top-$K$ becomes:\\n$$\\n \\\\mathcal{O}\\\\left(\\\\frac{\\\\sigma^2 K}{n\\\\varepsilon^4} + \\\\frac{\\\\sigma d}{\\\\rho^{5/2}\\\\varepsilon^3} + \\\\frac{d}{\\\\rho^3\\\\varepsilon^2}\\\\right),\\n$$\\nwhile the uncompressed decentralized SGD with gradient tracking has\\n$$\\n \\\\mathcal{O}\\\\left(\\\\frac{\\\\sigma^2 d}{n\\\\varepsilon^4} + \\\\frac{d}{\\\\rho^2\\\\varepsilon^2}\\\\right).\\n$$\\nIn the deterministic regime, i.e., when $\\\\sigma^2=0$, our method obtains a $\\\\frac{d}{\\\\rho^3\\\\varepsilon^2}$ rate, which is slightly worse than the rate of the uncompressed method by a factor of $1/\\\\rho$. This is, however, negligible when the graph is moderately well-connected. More importantly, in the noisy regime, which is more typical in the modern machine learning setting, the asymptotically dominant term for MoTEF is $\\\\mathcal{O}(\\\\frac{\\\\sigma^2 K}{n\\\\varepsilon^4})$, which has a $K/d$ factor of improvement over the uncompressed method, $\\\\mathcal{O}(\\\\frac{\\\\sigma^2 d}{n\\\\varepsilon^4})$. \\n\\n**W2:** The main contribution of our work is the novel MoTEF and MoTEF-VR algorithms with their convergence analysis. We provide convergence guarantees in general non-convex and PL regimes. We support our theoretical findings in training logistic regression with non-convex regularization as well as training of MLP and CNN (in the appendix) models. In all the cases, the empirical results demonstrate the superiority of the MoTEF algorithm. Moreover, we provide the experiments that showcase the robustness of the MoTEF algorithm to the changes of the network topology. 
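Returning to the W1 complexity comparison above: evaluating the two totals as the target accuracy shrinks makes the trade-off concrete (an illustrative sketch; the values of d, K, n, sigma, rho are arbitrary and constants/log factors are dropped):

```python
def motef_topk(eps, d=10**6, K=10**3, n=16, sigma=10.0, rho=0.2):
    # O(sigma^2 K / (n eps^4) + sigma d / (rho^{5/2} eps^3) + d / (rho^3 eps^2))
    return (sigma**2 * K / (n * eps**4)
            + sigma * d / (rho**2.5 * eps**3)
            + d / (rho**3 * eps**2))

def gt_uncompressed(eps, d=10**6, n=16, sigma=10.0, rho=0.2):
    # O(sigma^2 d / (n eps^4) + d / (rho^2 eps^2))
    return sigma**2 * d / (n * eps**4) + d / (rho**2 * eps**2)

for eps in (1e-2, 1e-4, 1e-6):
    print(f"eps={eps:.0e}: MoTEF/GT communication ratio ~ "
          f"{motef_topk(eps) / gt_uncompressed(eps):.4f}")
```

As eps shrinks, the ratio approaches K/d, reflecting the K/d improvement of the dominant stochastic term, while for moderate eps the rho-dependent middle term keeps the two totals comparable.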
For a more comprehensive experimental study with more complex models and real-life decentralized training hardware, we defer it to a more experiment-focused work in the future due to resource constraints.\\n\\n**Q1:** We appreciate the interest of the reviewer in the design of the Lyapunov function. First, the term $F^t$ typically appears in the Lyapunov-type analysis in the non-convex regime. Second, the two terms $\\\\Omega_1^t$ and $\\\\Omega_2^t$ are consensus errors that are present in the Lyapunov function as we consider decentralized training, and local models and gradient estimators should eventually be close to each other. Next, due to the presence of compression, there are additional terms $\\\\Omega_3^t$ and $\\\\Omega_4^t$ to control the compression error. Finally, the terms $\\\\hat{G}^t$ and $\\\\tilde{G}^t$ are there to control the difference between the full gradient $\\\\nabla F(X^t)$ and the momentum $M^t$, i.e., to showcase that $M^t$ is a good enough approximation of the true gradient. We emphasize that even though the convergence can be shown without using $\\\\hat{G}^t$, this term is crucial for proving the linear speedup with $n$ in the asymptotic term $\\\\frac{\\\\sigma^2}{n\\\\varepsilon^4}$. Therefore, it is important to have both terms in the Lyapunov function. The coefficients next to each term are chosen to balance the descent rates of all the terms.\"}", "{\"comment\": \"Here are the explicit references. We apologise for the accidental omission in our previous response.\\n\\n[1] Chung-Yiu Yau and Hoi-To Wai. Docom: Compressed decentralized optimization with near-optimal sample complexity. arXiv preprint arXiv:2202.00255, 2022.\\n\\n[2] Richt\\u00e1rik, Peter and Sokolov, Igor and Fatkhullin, Ilyas, EF21: A new, simpler, theoretically better, and practically faster error feedback, NeurIPS, 2021. 
\\n\\n[3] Koloskova, Anastasia and Loizou, Nicolas and Boreiri, Sadra and Jaggi, Martin and Stich, Sebastian, A unified theory of decentralized sgd with changing topology and local updates, ICLR, 2020. \\n\\n[4] Fatkhullin, Ilyas and Tyurin, Alexander and Richt\\u00e1rik, Peter, Momentum provably improves error feedback!, NeurIPS, 2024.\"}", "{\"summary\": \"This paper studies decentralized stochastic optimization with communication compression. It introduces the momentum tracking technique with error feedback, and achieves the first linear speedup convergence rate under the standard assumptions. Numerical experiments are conducted to validate the theoretical findings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. It combines momentum tracking and error feedback to attain an effective compressed decentralized algorithm.\\n\\n2. It achieves the first linear speedup convergence rate for decentralized algorithms with contractive compressors.\", \"weaknesses\": \"1. The novelty seems a little bit limited. The main idea and analysis techniques seem to be a direct extension of the centralized algorithm EControl (Gao et al., 2024) to decentralized settings.\\n\\n2. The insight behind the proposed algorithm is not well clarified. Why does the combination of momentum tracking and error feedback result in the linear speedup rate? It is encouraged to discuss how the algorithms are developed and highlight the insight.\\n\\n3. The dependence on the network topology, as the authors have discussed, is much worse than that of decentralized algorithms without compression.\", \"questions\": \"1. Please highlight the challenges in analysis and algorithmic developments compared to the EControl algorithm (Gao et al., 2024).\\n\\n2. Please have an in-depth discussion on how the algorithm is developed. Why does the combination of momentum tracking and error feedback result in the linear speedup rate? \\n\\n3. 
If there is no communication compression and error feedback, does your algorithm reduce to the pure momentum tracking algorithm? How does this momentum tracking algorithm compare with the well-known gradient tracking algorithm in convergence rate?\\n\\n4. In your Theorem 1, if the network is fully connected, i.e., rho=1, how does your algorithm compare with state-of-the-art centralized compressed algorithms such as EControl, Error-feedback with momentum, and NEOLITHIC?\\n\\n5. The numerical studies are a little bit trivial. In your MLP task, what dataset did you use? Can you evaluate your algorithm over more realistic tasks, such as ResNet on Cifar10?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I apologize for the delayed response and appreciate the authors' efforts in addressing my earlier questions; most of them seem to have been resolved satisfactorily.\\n\\nHowever, I have a minor clarification request: could the authors explicitly provide the references corresponding to [1], [2], [3], and [4] mentioned in the comments? I attempted to cross-check them with the references in the paper, but I am uncertain whether they are being referred to in the intended order. Providing these details would be greatly helpful.\"}", "{\"title\": \"Summary of the discussion period\", \"comment\": \"Dear reviewers,\\n\\nAs the discussion period comes to an end, we would like to summarize it now.\\n\\n1) All reviewers highlighted the importance of our theoretical contribution. 
In particular, convergence under arbitrary data heterogeneity and arbitrary contractive compression (reviewers rAXz, DZJC, SzmB).\\n2) We provided a detailed comparison of biased and unbiased compressors in the response to reviewer SzmB, where we highlight that biased compression schemes are superior in practice, achieve smaller variance both theoretically and practically, and are known to match the lower bound with distortion $\\\\alpha$ and $b$ encoding bits. This is particularly important in the comparison with decentralized algorithms that rely on unbiased compression (e.g., CEDAS). To support these claims empirically, we added the comparison against the CEDAS algorithm on non-convex logistic regression.\\n3) We improved the convergence of MoTEF by demonstrating that the consensus error $\\\\Omega_3^t = \\\\mathbb{E}[\\\\\\\\|\\\\mathbf{X}^t - \\\\bar{\\\\mathbf{x}}^t \\\\mathbf{1}^\\\\top \\\\\\\\|^2_{\\\\mathrm{F}}]$ converges to zero. This implies that not only the average model converges, but each local model as well.\\n4) We demonstrated that MoTEF matches the asymptotic and deterministic rates of state-of-the-art centralized algorithms if we set the spectral gap $\\\\rho=1$. Moreover, we showed that MoTEF improves the communication complexity in the asymptotic regime.\\n5) We acknowledge the importance of providing a comparison of the MoTEF algorithm against other baselines in training larger models. We are working on extending the empirical comparison against other baselines.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"Compression has become a key technique in federated learning to address the primary bottleneck of communication efficiency. This paper introduces a new algorithm, MoTEF, designed for decentralized federated learning with communication compression. 
The distinctive features of MoTEF include the integration of communication compression, momentum tracking, and error feedback.\\n\\nThe authors provide a convergence analysis showing that MoTEF achieves some of the best expected results, notably without requiring heterogeneity assumptions. They discuss convergence for general non-convex functions and for functions that satisfy the PL condition (a broader condition than convexity). Additionally, they present a momentum-based variance reduction variant of MoTEF.\\n\\nTheoretical insights into the algorithm are explored through comparisons with existing bounds, as well as through numerical experiments.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"It is impressive that the authors prove a convergence bound for such a complex algorithm without assuming a specific degree of data heterogeneity. Their other assumptions are also reasonable. Although the bound has a suboptimal dependence on \\\\rho, their experiments demonstrate that the algorithm\\u2019s sensitivity to \\\\rho can actually be much lower, offering valuable insights to the community.\\n\\nThe presentation is excellent, with comprehensive discussions that thoroughly compare their results to existing work.\", \"weaknesses\": \"More explanation of what Algorithm MoTEF actually does would improve the paper. From what is written, it seems the algorithm just puts together all the previous tricks into one place.\", \"one_minor_suggestion\": \"while the authors say \\\"The codes to reproduce our synthetic experiment can be accessed here\\\", the URL is provided at the end of page 9.\", \"questions\": \"Can you elaborate which of the three tricks (GT, momentum, error feedback) helps remove the data heterogeneity assumption in the paper?\\n\\nCan you simplify the bounds in (11)? In particular, there seems to be a tradeoff on \\\\alpha among the second, third, and fourth terms. 
In other words, can you provide a unifying bound that incorporates these three terms?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the explanations. Proving that the consensus error tends to zero is crucial, and it is good that you managed to prove it. So, I am increasing my score.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"**W1:** We would like to clarify the main differences between MoTEF and EControl. First, MoTEF incorporates EF21/CHOCO-style Error Feedback while EControl uses a more classical Error Compensation mechanism [9]. Having different error mechanisms leads to significantly different analysis techniques (e.g., analysis via virtual iteration vs. a more direct proof, and the design of the Lyapunov function: their Lyapunov function has only 3 terms while ours has 6, i.e., ours requires a much more involved and technical analysis). Next, EControl achieves linear speedup by properly balancing the error term $e^t$ and the error carried from the gradient estimator. In our work, this is done by incorporating momentum, which reduces the variance. Finally, in our work we provide the convergence guarantees under the PL condition and the convergence of MoTEF-VR, which was not done in the EControl paper.\\n\\n**W2:** Designing an algorithm with strong convergence guarantees without imposing assumptions on the problem or data is complicated. In MoTEF we incorporate three main ingredients to make it converge faster under arbitrary data heterogeneity. In particular, the combination of the EF21-type Error Feedback and Gradient Tracking mechanisms is the key factor in getting rid of the influence of data heterogeneity. We emphasize that not using one of them would lead to restrictions on the data heterogeneity. Indeed, EF21 is known to remove such dependencies in centralized training [1] while the GT mechanism is essential in decentralized learning [2]. 
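The heterogeneity-independence of the EF21 mechanism can be illustrated with a minimal centralized sketch on a toy quadratic problem (an illustrative sketch, not the paper's algorithm or code; the step size, Top-k parameter, and problem are arbitrary):

```python
import random

random.seed(1)
n, d, k, gamma = 8, 50, 5, 0.02

def top_k(v):
    # Contractive compressor: keep the k largest-magnitude entries.
    keep = set(sorted(range(d), key=lambda i: abs(v[i]), reverse=True)[:k])
    return [v[i] if i in keep else 0.0 for i in range(d)]

# f_i(x) = 0.5 * ||x - b_i||^2 with arbitrarily heterogeneous local optima b_i;
# the global minimizer of f = (1/n) sum_i f_i is the mean of the b_i.
b = [[5.0 * random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
x_star = [sum(b[i][j] for i in range(n)) / n for j in range(d)]

x = [0.0] * d
g = [[0.0] * d for _ in range(n)]   # worker-side gradient estimators
g_bar = [0.0] * d                   # aggregate, maintained via compressed deltas

for _ in range(3000):
    x = [x[j] - gamma * g_bar[j] for j in range(d)]
    for i in range(n):
        grad_i = [x[j] - b[i][j] for j in range(d)]           # grad f_i(x)
        delta = top_k([grad_i[j] - g[i][j] for j in range(d)])
        for j in range(d):
            g[i][j] += delta[j]           # EF21: track gradients via
            g_bar[j] += delta[j] / n      # compressed differences only

err = sum((x[j] - x_star[j]) ** 2 for j in range(d)) ** 0.5
print(f"||x - x*|| = {err:.2e}")  # converges despite unbounded heterogeneity
```

The local optima b_i can be made arbitrarily far apart without breaking convergence, since each worker's estimator g_i tracks its own local gradient through compressed differences; no bounded-heterogeneity assumption is used.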
Nonetheless, EF21 does not handle the error coming from stochastic gradients, and momentum is known to be one of the remedies for it [3]. Without the momentum term, EF21 is known to converge only with large enough batches in the centralized stochastic regime [3]. Moreover, we emphasize that earlier work [10] incorporates similar ideas but fails to achieve the optimal asymptotic rate due to the incorrect order of mechanisms in their algorithm. \\n\\nWith appropriately chosen parameters, the momentum technique reduces the variance (between the momentum and the gradient) as the algorithm proceeds. On a more technical level, this variance-reduction property enables us to do a descent analysis on $\\\\hat G^t$, the variance between the **averaged** momentum and the local gradients. Being able to reason with the **averaged** momentum is crucial for achieving the linear speedup because it enables us to analyze the averaged noises (whose variance is reduced linearly in $n$), instead of the individual noises, at each iteration.\\n\\n**Q1:** We refer to the response to **W1**.\\n\\n**Q2:** We refer to the response to **W2**.\\n\\n**Q3:** This is a great question. Indeed, if we set the compression operator to be the identity, the momentum parameter $\\\\lambda = 1 - \\\\beta$, and slightly modify the variables in MoTEF, then the momentum tracking algorithm from [1] is almost identical to our method. The main difference comes from the fact that we also use a mixing step with stepsize $\\\\gamma$, which is needed because of the use of compression (see [5], for instance). [4] also obtains the optimal asymptotic rate, as we do, but their deterministic rate has a slightly better dependency on the spectral gap $\\\\rho:$ $\\\\rho^{5/2}$ instead of $\\\\rho^3$ in our work. However, we highlight that combining momentum tracking and compression is a challenging task, as a naive use of contractive compressors might lead to divergence [6]. Therefore, the Error Feedback mechanism is needed to tackle this issue. 
This translates into more involved and technical proofs. The convergence rate of GT has $\\mathcal{O}(\\frac{\\sigma^2}{n\\varepsilon^4} + \\frac{\\sigma}{(\\rho^{3/2} + \\rho\\sqrt{n})\\varepsilon^{3}} + \\frac{1}{\\rho^2\\varepsilon})$ [7]. We observe that both MoTEF and GT achieve optimal asymptotic rate, but GT has slightly better dependency on the spectral gap in the deterministic regime: $\\rho^2$ instead of $\\rho^3.$\\n\\n**Q4:** If we set $\\rho=1$ in the convergence rate of MoTEF, we obtain the asymptotic $\\frac{LF^0\\sigma^2}{n\\varepsilon^4}$ and deterministic $\\frac{LF^0}{\\alpha\\varepsilon^2}$ rates that match those of EControl, EF21-SGDM, and Neolithic. The difference between algorithms is in the middle term(s) in the rate. Neolithic does not have this term since the analysis is performed under a more restricted assumption of the bounded gradient dissimilarity and relies on an impractical multi-stage compression mechanism (i.e., several communication rounds per iteration). The middle term in the convergence rate of EControl has a worse dependency on $\\alpha:$ $\\alpha^2$ instead of $\\alpha$ in our work. The convergence rate of EF21-SGDM has two middle terms. The worst of them scales with $\\alpha^{1/2}$ while it is $\\alpha$ in the convergence of MoTEF.\"}", "{\"title\": \"Rebuttals\", \"comment\": \"**W1:** Even though the mechanisms we used in the design of MoTEF algorithm are known in the literature, combining them is a challenging task and requires careful investigation. For example, we highlight that the order of steps in the algorithm plays a crucial role. In [1] they have similar mechanisms in the algorithm. They use two tracking steps: one is inside the momentum term (line 10 in Alg. 1 [1]) and another one in the update of gradient estimators (line 12 in Alg. 1 [1]). Afterward, the gradient estimators' differences are compressed and aggregated among neighbors. 
Their algorithm design leads to suboptimal asymptotic convergence. In our algorithm, the gradient tracking step is involved in the update of gradient estimator $V$ only (line 8) involving the averaging from the previous step. Then, new gradient estimators' differences are compressed. Based on the discussion above, we believe that combining contractive compression and stochastic gradients with gradient tracking and Error Feedback in decentralized training is a challenging task as it requires proper algorithm design to overcome all difficulties simultaneously without imposing strong assumptions on the problem.\\n\\nTo improve the writing, we aim to add the following discussion about the algorithm design which is also reported in Section A.1:\\n\\nDesigning an algorithm with strong convergence guarantees without imposing assumptions on the problem or data is complicated. In MoTEF we incorporate three main ingredients to make it converge faster under arbitrary data heterogeneity. In particular, the combination of EF21-type Error Feedback and Gradient Tracking mechanisms is the key factor in getting rid of the influence of data heterogeneity. We emphasize that not using any one of them would lead to restrictions on the data heterogeneity. Indeed, EF21 is known to remove such dependencies in centralized training [2] while the GT mechanism is essential in decentralized learning [3]. Nonetheless, EF21 does not handle the error coming from stochastic gradients and momentum is known to be one of the remedies to it [4].\\n\\n**W2:** In fact, it was a hyperlink in the text. In the revised version, we changed it to the footnote for visibility. The link on page 9 is to experiments with real data (logistic regression and neural networks).\\n\\n**Q1:** This is a great question! First, note that EF21 [2] Error Feedback mechanism removes the data heterogeneity assumption in the **centralized** setting. 
The GT mechanism is needed in **decentralized** training (without compression) as vanilla decentralized SGD is affected by the data heterogeneity [3]. Therefore, we believe that these two mechanisms are essential to designing an algorithm whose convergence is not affected by data heterogeneity in decentralized training.\\n\\n**Q2:** We thank the reviewer for raising this point. After carefully checking the derivations, we found several minor typos in the calculations and fixed them. Moreover, we provide a simplified convergence rate in both non-convex and PL regimes. Indeed, one of the middle terms dominates the other two, and therefore, the rate can be simplified. Similarly, we provide the simplified convergence rate of MoTEF-VR in the revised version of the paper.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer,\\n\\nWe would like to remind you that the discussion period ends soon. Therefore, we would like to know if there are any other concerns left unaddressed or should be clarified more. We would be happy to provide any further details to answer them. Thank you!\"}", "{\"title\": \"Response to the reviewer\", \"comment\": [\"2. Additional experimental results.\", \"In Figure 7, we compare MoTEF and CEDAS in terms of gradient norm vs. bits. We increased the parameter set for the step sizes; please see the description in Section D.6. We took into account that the communication of one step of MoTEF is two times larger since workers exchange messages twice per iteration. We observe that MoTEF either matches CEDAS or outperforms it in terms of communication complexity.\", \"We would like to emphasize that we demonstrate the best performance that is achievable by setting the parameters from the corresponding set. From the empirical observations, we observe that CEDAS requires smaller step-size parameters $\\\\gamma$ and $\\\\eta$ to achieve the same gradient norm as MoTEF. 
However, choosing too small step-sizes $\\\\gamma$ and $\\\\eta$ leads to a significantly slower convergence speed of CEDAS.\"]}" ] }
CLVMAUDeJz
Distributed Constrained Optimal Consensus Under a Directed Graph
[ "Xiangzheng Meng", "Jie Mei" ]
In this paper, the distributed constrained optimal consensus problem of multi-agent systems under a directed graph is investigated. We propose two projection-based distributed constrained optimal consensus algorithms: one addressing set constraints and the other tailored for general constraints. Only the relative state is exchanged among agents in these two algorithms. In the stability analysis of the case with set constraints, we transform the distributed optimization problem into a constrained leaderless consensus problem by adopting a sliding mode approach. Building on this foundational transformation, we further develop a projection-based distributed constrained optimal consensus algorithm to address general constraints. It is shown that the proposed algorithm achieves an ergodic convergence rate of $O(\frac{1}{k})$ with respect to the first-order optimality residuals. Numerical simulations are conducted to validate the effectiveness of our theoretical results.
[ "Constrained optimal consensus", "multi-agent systems", "set constraints", "general constraints" ]
Reject
https://openreview.net/pdf?id=CLVMAUDeJz
https://openreview.net/forum?id=CLVMAUDeJz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zD3sU48qPs", "t1vyG5H6ts", "g2PI635GYI", "f9JfBJdJc2", "566HoLb9pv", "4LL7OLUilf" ], "note_type": [ "meta_review", "official_review", "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1734655062114, 1730874141630, 1737524051150, 1729256293896, 1730231580829, 1730784991865 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10405/Area_Chair_jHv1" ], [ "ICLR.cc/2025/Conference/Submission10405/Reviewer_YjWw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10405/Reviewer_WgMd" ], [ "ICLR.cc/2025/Conference/Submission10405/Reviewer_Rjdk" ], [ "ICLR.cc/2025/Conference/Submission10405/Reviewer_8Wic" ] ], "structured_content_str": [ "{\"metareview\": \"This paper investigates the distributed constrained optimal consensus problem in multi-agent systems operating under directed graphs and introduces two projection-based algorithms to address this challenge. One algorithm handles set constraints, while the other is designed for more general constraints. Both methods rely solely on the exchange of relative state information among agents, enhancing privacy.\\n\\nFor set constraints, the authors transform the distributed optimization problem into a constrained leaderless consensus problem using a sliding mode approach. Building on this transformation, they extend their approach to develop a projection-based algorithm for general constraints. The proposed methods are proven to achieve an ergodic convergence rate in terms of first-order optimality residuals, ensuring effective and efficient solutions.\\n\\nTheoretical analysis establishes the algorithms' convergence under specific assumptions, and numerical simulations validate their effectiveness. Results demonstrate the linear convergence of the proposed methods, illustrating their robustness and applicability in distributed optimization tasks for multi-agent systems. 
These algorithms offer a practical solution for achieving constrained optimal consensus while preserving privacy and computational efficiency.\\n\\nAt least three reviewers commented that the paper is not well written and the content of the paper has a difficult flow. Besides that, there are a number of technical issues raised about the efficiency of the algorithm, the sufficiency of the empirical results, and some of the assumptions. However, overall, the quality of presentation overshadows all other concerns.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not participate in the rebuttal process.\"}", "{\"summary\": \"In this present paper, the authors presented the projection-based distributed constrained optimal consensus algorithms to solve distributed constrained optimal consensus problem of multi-agent systems. Under certain assumptions, they proved the convergence of these algorithms. Numerical examples were presented to illustrate the efficiency of the algorithms.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The presentation of the paper is well organized and the theoretical results are clear. The mathematics, as far as I have checked, are correct.\", \"weaknesses\": \"Beside the theoretical results, the efficiency of the proposed algorithms were not well exposed. For instance, my main concern is that it lacks of detailed comparison with existing works. The numerical examples are insufficient. There was only one baseline algorithm to be compared with the DPS (should be DSP) algorithm (6). A number of works, including the reference the present paper, should be taken into considerations.\\n\\n As for the theoretical part, the technical proof is conventional. If the contribution of the paper was theoretical, there was lack of validation of its theoretical contribution, for instance, to be employed to analyze other DSP algorithms. \\n\\nThe paper needs thorough proofreading. 
There are a number of typos and grammatical errors. I can point out only a few of them:\", \"line_29\": \"\\\"construct\\u201c should be \\\"constructing\\\";\", \"line_45\": \"\\\"uses\\\" should be \\\"use\\\"; \\u201dhas\\u201c should be \\\"have\\\";\", \"line_111\": \"the second word \\\"graph\\\" should be removed.\", \"line_137\": \"\\\"spinning\\\" should be \\\"spanning\\\".\\nline 484, \\\"DPS\\\" should be \\\"DSP\\\".\\netc.\", \"questions\": \"1. How do the projection operators work for the optimization performance? I suggest some ablation experiments to show the performance of these projection operators.\\n2. How to realize the projection under complicated constraints? For some complicated constraints, it is always difficult to realize a proper projection from the outside to the constrained region. For instance, the variable regions of the constraints are nonconvex or the dimensions of the variables are very high.\\n3. How is the sliding mode approach used in the algorithm? I cannot see the details of the sliding mode in the present paper. 
So it is difficult for me to validate its novelty or contribution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper develops algorithms for distributed optimization in directed graphs under agent-available constraints (both set and set + inequalities/equalities).\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Methods with constant step-size for distributed constrained optimization.\", \"Connections with the sliding mode technique.\"], \"weaknesses\": [\"Restrictive and inappropriate assumptions: Assumption 2.1 (balanced graph) is restrictive, while Assumption 4.3 is not suitable because $\\\\gamma_{ik}$'s are updated in the algorithm, so their boundedness cannot be assumed for convergence analysis (instead it requires a proof).\", \"Very poor writing.\", \"Step-size selection (also in a distributed manner) is not properly discussed.\", \"The problem chosen for experiments is artificial (as opposed to choosing one with possible applications, e.g., MPC). The step-size parameters were manually tuned for best performance (this is unpractical). There is only one baseline method to compare against.\"], \"questions\": \"Questions:\\n\\n1. Can you relax the assumption on the graph being balanced? This seems quite restrictive in real applications. Please also compare your contributions with the rich related literature on the subject (e.g., DIAG, push-sum, push-pull, etc.). \\n2. Can you provide a rigorous proof of boundedness for $\\\\gamma_{ik}(k)$ so as to omit Assumption 4.3? If so, does it apply locally (close to optimality) or globally? Alternatively, can you revise your analysis to avoid this assumption? 
Please also remove or replace the statement about \\\"continuous experimentation and testing\\\" in lines 407-408 with a more rigorous justification.\\n3. Can you provide a step-size selection rule that is efficiently computable in a distributed manner? In particular:\\na) Can you please explicitly describe how equations (30) and (42) can be computed in a distributed manner, if possible?\\nb) If not, can you propose an alternative distributed method for step-size selection and analyze its cost?\\nc) Can you also please provide clear ranges or guidelines for selecting $\\\\alpha_i,\\\\alpha_{v_i},\\\\alpha_{\\\\gamma_{ik}}$ in Theorem 4.4?\\n4. Can you please \\na) provide experiments on a more realistic application problem (such as distributed model predictive control or resource allocation)?\\nb) compare with more baseline methods (see 1.above for examples)?\\nc) describe a more systematic and distributedly amenable approach for parameter tuning?\", \"suggestions\": [\"Please add the step-size selection rules and convergence rate in the statements of your main theorems.\", \"I suggest writing Algorithm 1 and Algorithm 2 for the two methods and summarizing the key convergence results in comparison with relevant methods in a table in the main paper. You can compress the background material which is almost 50% of the current paper. There is also a lot of redundancy in the paper, e.g., the equilibrium equations on pages 6 and 8 are obvious.\", \"The language used is not very standard. 
The terms distributed optimization and consensus optimization are more widely used (compared to \\\"distributed optimal consensus\\\").\", \"The early work on the subject can be traced back to the seminal PhD thesis of John Tsitsiklis in the 80s, not just over the last decade or so as you state (line 30).\", \"Please explain more on the second-order dynamics mentioned in line 46, for completeness.\", \"Please define what you mention as \\\"relative state\\\" (I believe you mean what is most commonly referred to as a local variable in the literature).\", \"Theorem 2.2 is a classic result not worth stating.\"], \"editing\": [\"I am very sorry to say that the authors did not even make a minimal effort to proofread their paper before submitting it.\", \"The amount of typos and poor phrasing & writing style are excessive in this paper. Below is a non-exhaustive list:\", \"line 16: case -> the case.\", \"line 27: optimization of -> optimization for.\", \"line 29: construct -> constructing.\", \"line 45: has linear -> have linear.\", \"line 54: of incorporating -> to incorporate.\", \"line 63: primal-dual method -> primal-dual methods.\", \"line 88: the constant -> constant.\", \"line 89: By omitting local set constraints -> In the absence of local set constraints.\", \"line 111: balanced graph -> balanced; additionally, lines 111-112 should be placed after line 119.\", \"lines 131-133: poor phrasing.\", \"line 137: spinning -> spanning.\", \"line 148: delete i.e.,; also mention that $M_l$ depends on $U$.\", \"line 151: delete superscript ^2 in the definition of Lipschitz continuity.\", \"line 154: general -> generalized.\", \"In Definition 2.3, delete \\\"the convex hull of the set\\\".\", \"line 159: better to use different symbol for matrix, e.g., $H$, since $M$ was used for constants. 
In general, using capital letters for both scalars and matrices is confusing.\", \"line 166: projection variable -> variable.\", \"Using different symbols in Lemma 2.2 and (2) is poor style.\", \"In line 174, I believe the normal cone is defined for $x\\\\in Omega$ (missing).\", \"It is customary to write function of two variables as $V(x,y)$ in Lemma 2.3 and thereafter.\", \"line 192: I do not understand \\\"multiple first-order integrators\\\".\", \"line 197: by a -> that is.\", \"line 199: choice of notation $q$ in $f_i(q)$ is awkward.\", \"line 210 and similar instances (e.g., line 227): equation number suffices, i.e., equation (3) -> (3).\", \"line 215 and line 323, \\\\subseteq in the normal cones should be \\\\supseteq; also in line 323 $\\\\Omega_i$ is misstyped.\", \"line 228: . Or -> or.\", \"line 241: $T$ is not defined (step-size).\", \"line 258: delete \\\"=\\\" at the end of the line.\", \"line 271: $y_i$ should be $s_i$.\", \"In (8), $h_i$ is undefined (gradients).\", \"line 293: The detail of proof is given -> The details of the proof are given.\", \"Lemma 4.1: KarushKuhnTucker -> Karush-Kuhn-Tucker.\", \"line 360: is -> are.\", \"in (13) and thereafter you use $k$ both for iteration count and inequality counter, i.e., $\\\\gamma_{ik}(k)$; please work to fix the notational issues throughout the paper.\", \"line 403: add \\\".\\\" at the end.\", \"I do not understand $\\\\in$ in (17).\", \"In line 477: this should be the most non-standard way of writing a quadratic I have seen; mention also your choice of A_i in the experiments.\", \"after Fig. 1: in (a) of Figure 1 -> in Figure 1(a) [same for (b)].\", \"line 702: young's -> Young's.\", \"line 743: delete \\\"Then, \\\".\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper tackles the constrained optimal consensus problem in a distributed directed graph setting. 
In particular, the paper tackles two constraint types, namely set constraints and general ones, where both scenarios have dedicated algorithms and convergence analysis provided in the paper. Numerical results are provided to strengthen the claims of the paper and the proposed algorithm is compared to an existing method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"In general, I appreciated the efforts made by the authors to provide a rigorous analysis of the algorithms discussed in the paper.\", \"The paper emphasizes mathematical results by providing assumptions, definitions, and lemmas to then provide a convergence analysis of the method(s) they propose.\", \"Certain claims are validated in the experiments, such as the faster convergence of the proposed method for set constraints when compared to an existing algorithm.\"], \"weaknesses\": \"**The major weakness of this paper comes from the presentation.** Overall, I find the paper difficult to follow, and difficult to understand the extent of its contribution, which I detail further below:\\n\\n-\\t**Many mathematical results are given without context:** In many cases, the variables, assumptions, definitions, and equations are present without further elaboration or justification. For example, Assumption 2.1 states that the considered graph should be strongly connected and balanced, but it is not explained what is the motivation behind this assumption, nor the terms are clearly defined. Another example is the content of Section 2.2 and Section 2.4 which both consist of several mathematical statements and results placed one after the other without really linking them. 
A third example would be equation (18) and the sentence surrounding it, where it is not clear what information the reader should note about this statement.\\n-\\t**Proposed algorithms given without context:** Similar to the previous point, the two algorithms proposed in the paper are given without context, lacking a justification of the different terms in (5), (12), and (13). Without providing these, it is very difficult for the reader to understand the originality, novelty, and intuition behind the proposed methods. Every term of these equations should be explained and the novel parts should be emphasized. Currently, it is not clear what parts of the proposed algorithms are new and which ones are parts of existing methods. Additionally, some terms in the proposed algorithms are not defined, for example, $T$ and $\\alpha$ are not mentioned before (5).\\n-\\t**The motivation behind the proposed approaches is limited:** In Section 1.1, potential alternative approaches existing in the literature are presented. For the set-constrained case, it is stated in the paper that: \\n\\u201cOne drawback of utilizing diminishing step sizes is a slower convergence rate. Therefore, it is desired to design a projection based algorithm with constant step-sizes.\\u201d\\nAs for the general constraints, the authors mention:\\n\\u201cBesides the difficulties of projection operation, the existence of nonlinear inequality constraints has introduced significant complexities into stability analysis\\u2026\\u201d\\nThese two statements are, in my opinion, not very clear motivations for proposing alternative solutions. *For the former case, is a diminishing step-size that significant of a drawback, are there other studies on this? Additionally, what other drawbacks do the existing algorithms have, and are they also solved by the proposed approach? 
For the latter case, what type of complexities have been introduced, are they just more complex to analyze or does a stability analysis not exist for those algorithms?*\\n-\\t**The manuscript requires proofreading:** I will give a few examples. (i) The Laplacian $\\\\mathcal{L}$ of a graph is mentioned and used to make mathematical statements, before defining it in the next paragraph, at the beginning of page 3. (ii) At the beginning of Section 3.1, $f_i(q)$ is mentioned where $q$ has not been defined and is probably meant to be $x$. (iii) The algorithm used for comparison in Section 5 is sometimes referred to as DPS, sometimes as DSP. (iv) In Section 5, it is written that \\u201cThe DPS algorithm (6)\\u2026\\u201d, but equation (6) is an equation about the equilibrium analysis. (v) The legend of Figure 1.b refers to equation (24), which is never mentioned in the main text, and is a seemingly unrelated equation found in the appendix.\\n\\nAdditionally, I find that **the simulation section can also be developed further**. At least one more optimization problem can be studied. There also seems to be a mistake in the expression of $f_i(x_i)$ in Section 5, as it does not correspond to a quadratic function as mentioned in the text.\", \"questions\": \"I have provided a few questions in the \\u201cWeaknesses\\u201d part (highlighted in italic). On top of them, I have the following questions:\\n\\n-\\tI saw that a potential missing reference is: \\nLiu Q, Yang S, Hong Y. Constrained consensus algorithms with fixed step size for distributed convex optimization over multiagent networks. IEEE Transactions on Automatic Control. 2017 Mar 10;62(8):4259-65.\\nCan the authors detail how the method they propose compares to this reference?\\n\\n-\\tHow is the analysis different for undirected graphs? 
In which part does the directed aspect play a role in the methods provided?\\n-\\tIn Section 5, are the presented results obtained through multiple independent runs of the algorithms, for example using a Monte-Carlo method, or are the results based on a single run?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes two projection-based distributed constrained optimal consensus algorithms for multi-agent systems under a directed graph. The proposed algorithms only require agents to exchange their relative state, which helps preserve privacy. Theoretical result in terms of convergence rate is provided along with experiments to show the linear convergence of the proposed algorithms.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper studies the distributed constrained optimal consensus problem of multiagent systems under a directed graph and utilizes constant learning rates to achieve faster convergence compared with conventional decaying learning rate methods. The proposed methods are supported by typical convergence analysis and some experimental results.\", \"weaknesses\": \"1. The paper is not well organized and is hard to follow.\\n2. The experiments are limited. For example, only one synthetic data is tested and the experiments only show the convergence properties of the related errors. The comparison with SOTA does not look convincing in several aspects. First, what is the DSP /DPS algorithm? The two names of the algorithm both appear in the paper while the full name is missing and it is not clear what is the difference of the DSP/DPS(?) algorithm and the proposed method except for the learning rate. Second, a number of optimization methods with constraints have been reviewed in the Introduction part, however, it is not clear why they are not used as benchmarks in the experiments. \\n3. 
The writing needs improvement. Many equations have spacing issues and some sentences are missing punctuation. For example, Line 403 \\\"Stability Analysis: In this part, we have the following assumption\\\" should be \\\"Stability Analysis: In this part, we have the following assumption.\\\", line 270 \\\"equation (7)\\\" should be \\\"Equation (7)\\\". Some symbols are not defined such as $T$ and $\\alpha_i$ when they are used in the equations.\", \"questions\": \"See the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
CLImhawlGn
Channel Independence Improves Out-of-Distribution Generalisation in Multivariate Time Series Classification
[ "Tom Ryder", "Xi Chen" ]
Robustness to distribution shift is a necessary property of machine learning models for their safe and effective deployment. However, deep learning models are susceptible to learning spurious features of the in-distribution (ID) training data that fail to generalise to out-of-distribution (OOD) data. Domain generalisation algorithms aim to tackle this problem, but recent studies have demonstrated that their improvement over standard empirical risk minimisation is marginal. We address this problem for multivariate time series classification (TSC), where it is standard practice to use feature extractor architectures that learn with channel dependence (CD), enabling cross-channel patterns to be learned. Inspired by recent success in time series forecasting, we investigate how channel independence (CI) impacts OOD generalisation in TSC. Our experiments on six time series datasets reveal that ID and OOD features exhibit significantly greater distributional divergence when learned with CD compared to CI. As a consequence, models that learn with CI are more robust to distribution shift, evidenced by smaller generalisation gaps (the difference between ID and OOD performance) across datasets. On datasets that have a stronger shift, OOD accuracy is substantially higher for CI than CD.
[ "time series classification", "OOD generalization", "domain generalization" ]
https://openreview.net/pdf?id=CLImhawlGn
https://openreview.net/forum?id=CLImhawlGn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "hK4JgajbQm", "aSdK7hiZt3", "YmjmTbeq8C", "79NzNHslV8", "5BnXgjzaoZ" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730343874325, 1730464616902, 1732184620114, 1730744916446, 1730721756593 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9897/Reviewer_rT8V" ], [ "ICLR.cc/2025/Conference/Submission9897/Reviewer_scfK" ], [ "ICLR.cc/2025/Conference/Submission9897/Authors" ], [ "ICLR.cc/2025/Conference/Submission9897/Reviewer_7Tc3" ], [ "ICLR.cc/2025/Conference/Submission9897/Reviewer_ZEwE" ] ], "structured_content_str": [ "{\"summary\": \"This paper argues that the ongoing discussion of CI v. CD learning in the context of time series regression may also have interesting results in the case of time series classification. The work then demonstrates that a large generalization gap occurs across six real-world human-activity datasets under a particular type of OOD shift. It is demonstrated how this robustness gap can often lead to better performance with the (less powerful) CI models. 
Finally, some theory is discussed as an explanation for why this robustness gap is occurring.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The work applies to a relatively less explored application of time series classification which extends the existing discussion of channel independence and channel dependence which is currently focused on the case of regression.\\n\\nThe work achieves the expected result of better robustness for simpler models.\\n\\nFigures displaying the results make it easy to digest the real-world experiments done across six real-world datasets.\\n\\nFigure 3 begins to give insights into how the individual features may be heterogeneously used by the model, which may be easier to do in the simpler CI space when compared to the CD space.\", \"weaknesses\": \"When transitioning from TSR to TSC, the specific nuances of the classification regime are not also considered and the existing CI v. CD distinction from TSR is exactly copied into the TSC regime. In particular, in line 120-122, the authors introduce a mixed CI/CD approach which extracts CI features which are then combined with another MLP (CD) for final classification. This is a regime which is not possible in TSR and is seemingly left completely unexplored in this work.\\n\\nThe time domain features and frequency domain features are both included but very little analysis is given. Importantly, the major conclusion that CI is generally better than CD does not hold for the results in the frequency space. This is not sufficiently discussed in the current version of the work.\\n\\nOnly one model is used as a representative for all CI models and all CD models. It is unclear how straightforward this distinction is and how robust the results are to different architectures. 
Given that the above change in feature landscape has a significant impact, it should be expected that there may also be sensitivity to slight architecture changes.\\n\\nIt is unclear how generic the results are for all of TSC. Some key factors in the empirical evaluation could be potentially identified as limitations. In particular, all six datasets correspond to human trajectories, which may not be representative of all TSC tasks, and second, the distribution shift always corresponds to the shift in the human who was taken as the generator of the trajectory. This is a specific type of OOD shift which may be stronger as well as more specific than a general OOD shift. The theoretical results do not seem strong enough to support the generality of such claims.\", \"questions\": \"Can you clarify if in your analysis you were able to uncover any key features of the datasets where \\\"CI-time\\\" outperformed \\\"CD-time\\\" as well as when \\\"CI-freq\\\" outperformed \\\"CD-freq\\\"? How do you think modified architectures like [1] would fit into this dichotomy?\\n\\nThe most important quantity, practically speaking, is not the generalization gap, but rather the final performance. From this lens, it could be argued that this work has made minimal progress towards understanding the question of whether to use CI or CD in TSC. How do you feel your theoretical results and empirical results support an answer to the question of whether to use CI or CD in the domain of TSC?\\n\\nIn this work, all results seem to be stated for a CNN architecture. Do you also use CNNs for the frequency domain? How well do you think these results will generalize to other architectures and why?\\n\\n\\n[1] \\\"Time Series Classification Using Multi-Channels Deep Convolutional Neural Networks\\\" Yi Zheng, et al. 
2014.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript proposes to use the channel independence (CI) method in the time series data classification problem, and conducts theoretical analysis, showing that the CI method can improve the generalization ability of the model for out-of-distribution data. The authors also conduct relevant experimental analysis, showing that the effect of CI is better than CD. Moreover, the authors conduct relevant experiments on frequency domain analysis, and the results also show that CI can enhance the generalization ability of the model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The motivation of this study is very clear. It applies the CI method, which performs well in time series prediction tasks, to time series classification tasks, and achieves significant improvement in results.\\n2. The authors design and conduct a variety of experiments, with clear experimental methods and credible results.\", \"weaknesses\": \"1. The research lacks innovation and novelty, and it only transplants the CI method (arXiv preprint arXiv:2211.14730, 2022.) in the prediction task to the classification task. This change is too simple and lacks sufficient academic value.\\n2. The author's theoretical analysis seems to be mainly based on the related work of (Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In NeurIPS, 2006.), lacking independent research contributions.\", \"questions\": \"1. How can the datasets used by the study reflect the generalization ability of the model to \\u201cout of distribution\\u201d data? 
Can the authors explicitly show the out-of-distribution data characteristics of the relevant datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"After careful consideration, we have decided to withdraw our paper. We greatly appreciate the time and effort the reviewers dedicated to evaluating our work. The feedback provided is highly constructive and will help us refine and strengthen our research moving forward.\"}", "{\"summary\": \"This paper investigates the impact of Channel Independence (CI) and Channel Dependence (CD) on Out-of-Distribution (OOD) generalization in time series classification (TSC). The authors propose a channel-wise ensemble method that leverages the advantages of CI to effectively handle multivariate TSC problems. They compare this method with CD-based approaches through theoretical analysis and experiments on six real-world datasets. In addition, they analyze the performance of combining time-domain and frequency-domain features to address distribution shifts. The results indicate that while CD performs better on in-distribution (ID) data, CI offers superior OOD generalization capabilities, demonstrating high robustness, especially under significant distribution shifts. Introducing frequency features also improves OOD performance in CD models but does not provide additional benefits in CI models.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper provides a well-articulated motivation for exploring the impact of CI and CD on OOD generalization in TSC, addressing a significant challenge in deploying time series machine learning models in real-world applications.\\n2. 
By employing domain adaptation theory, the authors offer a theoretical explanation for why CI may generalize more robustly under certain conditions, grounding their experimental findings.\", \"weaknesses\": \"1. The theoretical analysis using domain adaptation theory seems to lack depth and does not connect well to the proposed method. For example, in Section 4.2, the upper bound from Theorem 1 is approximated using Jensen's inequality without discussing the tightness of this bound. In fact, the theoretical inequality applied to CD in the experiment and the theoretical inequality of CI are inequalities that, as a result, have different upper bounds. This omission raises questions about the validity of the theoretical conclusions drawn. For another example, the assumption that distribution shift is solely due to covariate shift ($P_S(X) \\\\neq P_T(X)$ and $P_S(y|X) = P_T(y|X)$) may not hold in real-world scenarios. The paper does not provide empirical evidence to support this assumption within their experimental setup.\\n\\n2. The methodological positioning and contribution are also confused or weak. For example, the authors label their method as a CI method, but it differs from most existing CI methods in the time series forecasting literature. CI methods, like DLinear, use a \\\"shared backbone\\\" applied independently to different channels. In contrast, the proposed method involves training separate models for each channel and combining them in an ensemble. This could cause confusion and may misrepresent the novelty of the approach. References: [1], [2], [3]. For another example, the channel-wise ensemble approach resembles existing methods in time series classification, such as those discussed by Ruiz et al. (2021) and the HIVE-COTE ensemble methods. The paper does not clearly differentiate its contributions from these existing works, nor does it provide direct comparisons. References: [4], [5], [6].\\n\\n[1] Han, L., Ye, H. J., & Zhan, D. C. (2024). 
The capacity and robustness trade-off: Revisiting the channel independent strategy for multivariate time series forecasting. IEEE Transactions on Knowledge and Data Engineering.\\n\\n[2] Zeng, A., Chen, M., Zhang, L., & Xu, Q. (2023, June). Are transformers effective for time series forecasting?. In Proceedings of the AAAI conference on Artificial Intelligence (Vol. 37, No. 9, pp. 11121-11128).\\n\\n[3] Nie, Y., Nguyen, N. H., Sinthong, P., & Kalagnanam, J. (2022). A time series is worth 64 words: Long-term forecasting with transformers. arXiv preprint arXiv:2211.14730.\\n\\n[4] Ruiz, A. P., Flynn, M., Large, J., Middlehurst, M., & Bagnall, A. (2021). The great multivariate time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Mining and Knowledge Discovery, 35(2), 401-449.\\n\\n[5] Bagnall, A., Flynn, M., Large, J., Lines, J., & Middlehurst, M. (2020). On the usage and performance of the hierarchical vote collective of transformation-based ensembles version 1.0 (hive-cote v1. 0). In Advanced Analytics and Learning on Temporal Data: 5th ECML PKDD Workshop, AALTD 2020, Ghent, Belgium, September 18, 2020, Revised Selected Papers 6 (pp. 3-18). Springer International Publishing.\\n\\n[6] Middlehurst, M., Large, J., Flynn, M., Lines, J., Bostrom, A., & Bagnall, A. (2021). HIVE-COTE 2.0: a new meta ensemble for time series classification. Machine Learning, 110(11), 3211-3243.\\n\\n3. The validation part is also confused or weak. For example, the authors define each participant as a domain but do not verify whether genuine distribution shifts exist between the training and test groups. Statistical analysis demonstrating the presence and extent of distribution differences is necessary to substantiate the claims about OOD generalization. For another example, all six datasets are related to human activity recognition or stress detection. 
This narrow focus may limit the generalizability of the findings to other domains, such as finance or environmental monitoring.\\n\\n4. Other points may be considered weak. For example, the paper does not compare the proposed method with other state-of-the-art OOD generalization techniques. Without such comparisons, it is challenging to evaluate the method's relative performance and contributions. For another example, the ensemble assigns equal weights to all channel models. The potential benefits of alternative weighting schemes, such as weighting based on individual channel performance, are not explored. Meanwhile, the experiments are conducted exclusively with Fully Convolutional Networks (FCNs). It remains unclear whether the observed benefits of CI extend to other architectures like RNNs or Transformers.\\n\\n5. Finally, while the authors argue that there is not much work on the OOD generalization of the TSC task, they did not pinpoint the exact reason why CI is advantageous over CD in the context of TSC. They just borrow theories and concepts that are non-specific to the TSC task. Based on the above concerns, particularly regarding the misrepresentation of the method, insufficient theoretical analysis, lack of verification of distribution shifts, and inadequate comparison with existing methods, I recommend that this paper be rejected in its current form. Addressing these issues could significantly strengthen the work for future submission.\", \"questions\": \"Please see the weaknesses.\\n\\n1. For example, how does your method fundamentally differ from traditional CI approaches that use shared backbones independently on each channel? What is the rationale for labeling your method as CI, and why can't existing CI methods be directly applied in your context?\\n\\n2. For another example, can you please provide more insight into the tightness of the upper bound approximated using Jensen's inequality? 
How does this approximation impact the validity of your theoretical analysis and the comparison between CD and CI? \\n\\n3. Meanwhile, have you conducted statistical analyses to confirm that the domains (participants) exhibit significant distribution shifts? Providing evidence of genuine distribution differences would strengthen your claims about OOD generalization. \\n\\n4. Finally, can you please validate that your findings can generalize to datasets from other domains beyond human activity recognition, such as financial or transportation time series data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the critical issue of model robustness to distribution shifts in machine learning, particularly within the context of multivariate time series classification (TSC). The authors investigate the impact of channel independence (CI) on out-of-distribution (OOD) generalization and compare it with the conventional approach of channel dependence (CD). Through experiments on six real-world multivariate time series datasets, this paper demonstrates that models employing CI exhibit smaller distributional divergence and thus are more robust to distribution shifts, leading to improved OOD accuracy, especially on datasets with more severe distribution shifts.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper tackles a significant challenge in machine learning\\u2014OOD generalization\\u2014which is crucial for the safe and effective deployment of models in real-world applications. 
The exploration of CI in TSC is a novel approach that offers fresh insights into improving model robustness.\", \"This paper is supported by empirical evidence from six diverse real-world datasets.\", \"This paper further explores the impact of frequency domain features on OOD generalization within the context of CD and CI, by comparing the distributional differences and classification performance between time domain and frequency domain features, which provides a more comprehensive perspective.\", \"This paper is well-organized and clearly written, making complex concepts accessible and the findings easy to follow.\"], \"weaknesses\": [\"This paper's contribution is somewhat marginal; it focuses more on defining and elucidating the problem without theoretical bounds on the generalization, with less advancement in theory and algorithms, hence its impact on the field of multivariate time series classification is not particularly significant.\", \"The experiments in the paper are relatively weak, which does not strongly support the conclusions, and the use of only one model and a limited number of datasets weakens the robustness of the findings. In detail, experiments on more datasets covering different types of relationships among variables (such as spatial dependencies, mutual influences, etc) could make the conclusion in this paper more convincing.\"], \"questions\": [\"The experimental results in the paper are based solely on a 1D-CNN classification model, which seems somewhat limited. What will happen if we apply non-CNN multivariate time series classification methods, such as those based on transformers (e.g., shapeformer, SVP-T) and contrastive learning methods (Ts-vec)?\", \"There are six datasets used in this paper, which is insufficient to prove the universality of channel independence. 
In particular, the UCR (UEA) dataset is a standard choice for many multivariate time series classification methods, and it would be interesting to see how the paper's approach fares on this benchmark. In detail, the benchmark datasets used in the article, with the exception of WESAD, all pertain to human activity recognition. Therefore, I suggest incorporating datasets from other domains within the UEA, such as those for motion classification, ECG classification, EEG/MEG classification, and audio spectra classification. Additionally, as mentioned in the survey (Deep learning for time series classification and extrinsic regression: a current survey. ACM Computing Surveys, 2024.), datasets related to earth observation satellites and instruments could be utilized.\", \"Could the authors elaborate on the criteria used for selecting the window lengths and the decision-making process behind the use of overlap in the context of this study?\", \"Given the proximity of anomaly detection tasks to classification tasks, it would be valuable to have experimental results that demonstrate how the paper's conclusions apply to anomaly detection scenarios. Are there any additional experiments or analyses that could provide insights into this aspect? Four benchmark datasets widely used in multivariate time series anomaly detection research include SWaT, SMD, SMAP, and MSL. The extension of different methods and different downstream tasks can make the value of this article more significant.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
CLE09ESvul
What should a neuron aim for? Designing local objective functions based on information theory
[ "Andreas Christian Schneider", "Valentin Neuhaus", "David Alexander Ehrlich", "Abdullah Makkeh", "Alexander S Ecker", "Viola Priesemann", "Michael Wibral" ]
In modern deep neural networks, the learning dynamics of individual neurons are often obscure, as the networks are trained via global optimization. Conversely, biological systems build on self-organized, local learning, achieving robustness and efficiency with limited global information. Here, we show how self-organization between individual artificial neurons can be achieved by designing abstract bio-inspired local learning goals. These goals are parameterized using a recent extension of information theory, Partial Information Decomposition (PID), which decomposes the information that a set of information sources holds about an outcome into unique, redundant and synergistic contributions. Our framework enables neurons to locally shape the integration of information from various input classes, i.e., feedforward, feedback, and lateral, by selecting which of the three inputs should contribute uniquely, redundantly or synergistically to the output. This selection is expressed as a weighted sum of PID terms, which, for a given problem, can be directly derived from intuitive reasoning or via numerical optimization, offering a window into understanding task-relevant local information processing. Achieving neuron-level interpretability while enabling strong performance using local learning, our work advances a principled information-theoretic foundation for local learning strategies.
[ "local learning", "interpretability", "neuro-inspired", "information theory", "partial information decomposition" ]
Accept (Oral)
https://openreview.net/pdf?id=CLE09ESvul
https://openreview.net/forum?id=CLE09ESvul
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xRMureIGc7", "xKXcBBPqWo", "theHTcq9QI", "sc6IOmUl47", "oChKs2J6u4", "mda6LJA1qo", "kGI9hsMZ28", "hxYhU6aTjB", "hFithLwIai", "g9wLOef7rU", "aQV86PcUy9", "a85yr3R0sG", "St2HfVaPwd", "QFFAnGfr0C", "OXxBjNpbiE", "OK2XQ3gltQ", "GcNhvtckHt", "EWApOHqsda", "E7H4olCD1T", "A9S7sx86Fy", "8i7xkCAP6K", "22DMsRVp8Z" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1732806705515, 1737524064977, 1732189250147, 1732982276279, 1732930188221, 1732189337093, 1732189061301, 1732188975034, 1732715445533, 1730391579814, 1732575466582, 1732189221271, 1732721527009, 1732188805327, 1732189350746, 1732561936194, 1733607465746, 1730669767243, 1733152703407, 1732485252745, 1730527543809, 1730505375349 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ "ICLR.cc/2025/Conference/Submission10601/Reviewer_dmEh" ], [ "ICLR.cc/2025/Conference/Submission10601/Reviewer_Mc5F" ], [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ "ICLR.cc/2025/Conference/Submission10601/Reviewer_wfkU" ], [ "ICLR.cc/2025/Conference/Submission10601/Reviewer_dmEh" ], [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ "ICLR.cc/2025/Conference/Submission10601/Reviewer_dmEh" ], [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10601/Reviewer_wfkU" ], [ "ICLR.cc/2025/Conference/Submission10601/Area_Chair_4eVi" ], [ "ICLR.cc/2025/Conference/Submission10601/Reviewer_SbrR" ], [ "ICLR.cc/2025/Conference/Submission10601/Authors" ], [ "ICLR.cc/2025/Conference/Submission10601/Reviewer_SbrR" ], [ "ICLR.cc/2025/Conference/Submission10601/Reviewer_Mc5F" ], [ "ICLR.cc/2025/Conference/Submission10601/Reviewer_dmEh" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your interest in the implementation details, which helps us make our methodology clearer.\\n\\nTo compute the PID, the joint probability masses $p(y, \\\\hat{f}, \\\\hat{c}, \\\\hat{l})$ for each bin, i.e., for all combinations of $y$ and the binned $\\\\hat{f}$, $\\\\hat{c}$ and $\\\\hat{l}$, are required (i.e., 2x20x20x20 values in total, of which, however, many are zero). To convert the conditional probability $p(y \\\\mid f, c, l)$ for each sample to $p(y \\\\mid \\\\hat{f}, \\\\hat{c}, \\\\hat{l})$ for each bin, the per-sample values $p(y \\\\mid f, c, l)$ where $(f, c, l)$ fall into the same bin $(\\\\hat{f}, \\\\hat{c}, \\\\hat{l})$ are averaged. In our implementation, $p(y, \\\\hat{f}, \\\\hat{c}, \\\\hat{l})$ is constructed as a weighted histogram of size 2x20x20x20. For each sample $(f, c, l)$, the probability $p(y\\\\mid f, c, l)=\\\\sigma(A(f,c,l))$ is computed, after which each sample is added to the histogram twice: Once into the bin $(y=+1, \\\\hat{f}, \\\\hat{c}, \\\\hat{l})$ with a weight of $p(y=+1\\\\mid f, c, l)$ and once to the bin $(y=-1, \\\\hat{f}, \\\\hat{c}, \\\\hat{l})$ with weight $p(y=-1 \\\\mid f, c, l)=1-p(y=+1\\\\mid f, c, l)$. 
Finally, the histogram is normalized by dividing it by the number of samples in the batch, producing $p(y, \\\\hat{f}, \\\\hat{c}, \\\\hat{l})=p(y\\\\mid \\\\hat{f}, \\\\hat{c}, \\\\hat{l})p(\\\\hat{f}, \\\\hat{c}, \\\\hat{l})$, where $p(\\\\hat{f}, \\\\hat{c}, \\\\hat{l})$ is obtained implicitly from the frequency of samples falling into the corresponding bin.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"comment\": \"> There seems to be a gap between the claimed \\\"neuron-level interpretability\\\" and the proposed learning framework. Usually, interpretability means understanding how a trained network makes predictions and what the network has learned. However, the method proposed in this paper only provides interpretation for the learning objective, rather than insight of what the model and each neuron has learned. Furthermore, the interpretability is still global, not neuron-level, as the goal parameters are shared across all neurons, i.e., the interpretation for all neurons are the same. Finally, the cross-entropy loss and the backpropagation process are quite human interpretable, in my opinion. Could the authors comment on this?\\n\\nPlease refer to the general comment for an explanation of how the notion of interpretability that infomorphic neurons offer is on a different abstraction level than task-level interpretability. While the goal parameters are currently shared by all neurons, all information that the neuron requires for training is provided locally and not by a global backpropagation signal, which is why this description gives a better insight into the actual information processing necessary at a local scale to fulfill a global goal in a self-organized fashion. 
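(Editor's sketch: the weighted-histogram estimate of $p(y, \hat{f}, \hat{c}, \hat{l})$ described in the implementation reply above can be reproduced in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code — the function name, the equal-width binning over the observed range, and the interface are assumptions.)

```python
import numpy as np

def estimate_joint_pmf(f, c, l, p_y_pos, n_bins=20):
    """Estimate p(y, f_hat, c_hat, l_hat) as a weighted histogram.

    f, c, l  -- 1-D arrays of per-sample feedforward, context and lateral values
    p_y_pos  -- per-sample probability p(y=+1 | f, c, l), e.g. a sigmoid of the activation
    Returns an array of shape (2, n_bins, n_bins, n_bins) that sums to 1.
    """
    def to_bins(x):
        # Equal-width bins over the observed range; passing only the interior
        # edges to digitize makes the maximum value land in the last bin.
        edges = np.linspace(x.min(), x.max(), n_bins + 1)
        return np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)

    fb, cb, lb = to_bins(f), to_bins(c), to_bins(l)
    pmf = np.zeros((2, n_bins, n_bins, n_bins))
    # Each sample is added to the histogram twice, weighted by its soft
    # label probabilities (np.add.at accumulates over duplicate bin indices).
    np.add.at(pmf, (0, fb, cb, lb), p_y_pos)        # slice for y = +1
    np.add.at(pmf, (1, fb, cb, lb), 1.0 - p_y_pos)  # slice for y = -1
    return pmf / len(f)
```

Normalizing by the batch size then yields a proper joint pmf, with $p(\hat{f}, \hat{c}, \hat{l})$ given implicitly by the bin occupancy, as in the reply above.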
While cross-entropy loss gives a good intuition on the mechanistic goal of the output layer's neurons, it remains opaque what information-theoretic goals the neurons of earlier layers need to fulfill in order to provide the best intermediate representation to the output layer.\\n\\n> \\\"For the supervised classification task at hand, the function ... has been chosen ... This ensures that the network performs similarly during training, when context and lateral inputs are provided, and for evaluation, where the context signal is withheld.\\\" It seems that the given activation function is for training, since it uses the context (label) as input. Can the authors define clearly what the activation function during testing is?\\n\\nDuring testing, the same activation function is used, but the context input is set to zero. We have added a clarification for this in the manuscript.\\n\\n> \\\"One promising path towards constructing deeper networks is using stacked hidden layers that receive feedback from the next layer, similar to setup 3.\\\" It is a bit unconvincing to replace label feedback by feedback connection from the next layer. In the latter, the neuron's learning goal is to capture the part of the feedforward signal that agrees with the next layer's output, which is less intuitive than capturing the part that agrees with the label.\\n\\nThe idea to use feedback connections as context instead of the full label is to achieve more biological plausibility and locality by not providing the full label information to every hidden layer. The idea is for the network to train hierarchical representations \\\"from back to front\\\", the later layers informing the earlier layers which important information to forward.\\nNevertheless, we agree with the reviewer that this idea needs further development and providing the full label to all hidden layers marks the most straightforward and promising generalization to multiple hidden layers for now. 
We have included an appendix section which summarizes our preliminary results on this topic, showing that deeper infomorphic networks successfully train with the same goal functions as their shallow counterparts and achieve similar performance for fewer trained weight parameters.\\n\\n> Figure 8, comparison with bivariate model: Since the goal is to compare trivariate with bivariate, it might be better to put trivariate and bivariate on the same axis. If it'd be too crowded, maybe group the results by \\\"Heuristic\\\"/\\\"Optimized\\\".\\n\\nWe agree that this change facilitates the most important comparisons and have made the suggested changes to Figure 8.\\n\\n> In \\\"Goal parameters\\\" paragraph in Experiments, it might be better to state that \\\"heuristic goal function\\\" is $\\\\Pi_{\\\\{F\\\\}\\\\{C\\\\}}$. Although it can be inferred from Fig. 4, it's better to define it in the text as well.\\n\\nThe heuristic trivariate goal function for classification tasks is introduced in the last paragraph of Section 3 as the combination of two bivariate goal functions. We have highlighted this section better in the manuscript.\\n\\n> In Fig. 4, is the difference in validation accuracy defined as \\\"after setting to 0 - before\\\"?\\n\\nYes, both Figures 4B and 4D show the same concept of difference in validation accuracy for setting goal function parameters to zero for different learning tasks.\"}", "{\"comment\": \"Thank the authors for their detailed explanation and addressing all my questions. 
I have increased my score from 6 to 8.\"}", "{\"title\": \"Comments on the authors replies\", \"comment\": \"I would like to thank the authors for clarifying the points I mentioned in the original reviewer report, which addressed all of my concerns.\"}", "{\"comment\": \"> If I did not overlook it, the consistency equations for the trivariate infomorphic neuron model are not included in the paper.\\n\\nFor completeness, we added the consistency equations for trivariate Partial Information Decomposition as an additional appendix section.\\n\\n> The discussion of weight updates is limited. It is mentioned that either autograd or the analytical formulation in (Makkeh et al. 2023) can be used. However, the analytical approach in (Makkeh et al. 2023) applies to the bivariate infomorphic neuron model. Although the weight update derivation is potentially a similar approach, the absence of an explicit derivation raises a concern. Without this, I am uncertain if the resulting learning is biologically plausible.\\n\\nWhile analytical gradients can in principle be derived for the trivariate case, we have so far not seen a direct benefit of doing so over using autograd. Please note that the autograd approach results in gradients mathematically equivalent to the explicit analytical formulation, which we have also confirmed empirically for the bivariate case. Since the weight updates therefore take the same local inputs and produce the same outputs, the biological plausibility remains unaffected. As outlined in the discussion section, we do not expect biological systems to implement PID learning goals directly, but they nevertheless may have mechanistic learning rules which implicitly optimize for a similar information-theoretic goal.\\n\\n> It is mentioned that (line 308) the parameters can be derived from intuitive notions. 
However, unless I overlooked something, the explanation for determining these parameters is not provided in detail beyond a brief note in the caption of Figure 2.D.\\n\\nAs outlined in the last paragraph of Section 3, the intuition behind the trivariate goal function stems from the combination of two intuitive requirements for the neuron's goal: Firstly, a neuron should encode in its output information which is coherently encoded in both the input to the network and the label. Secondly, a neuron should provide information in its output that is unique with respect to what the other neurons already encode. Combining these notions together, the neuron should intuitively encode that part of the information which is redundant between the feedforward and context inputs while simultaneously being unique with respect to the other neurons of the same layer. We highlighted this section better in our manuscript.\\n\\n> I am concerned that plug-in estimation of the information atoms based on empirical probability mass functions may face challenges in high-dimensional settings due to the difficulty of density estimation. While MNIST may be manageable despite being high-dimensional, this could be an issue in more complex settings.\\n\\nPlease refer to the general comment for a response to this point.\\n\\n> Lines 511-515 mention that deeper networks were trained and achieved comparable or better accuracy results; however, these results are not shown. The paper only includes experiments with a single hidden layer.\\n\\nWe have added an appendix section summarizing our preliminary results on deeper networks.\\n\\n> The algorithm in Function 2 (Appendix A.1) requires quantities labeled 'isx_redundancies' and 'pid_atoms' (lines 732 and 733), but it is not specified how these quantities are computed.\\n\\nThe isx_redundancies are computed from the given probability mass function according to the analytical definition of $I_\\\\cap^{\\\\mathrm{sx}}$ defined in Makkeh et al. 
2021, while the pid_atoms are computed from these redundancies by means of a Moebius inversion. We expanded the pseudocode section in the paper to incorporate this information.\\n\\n> According to Appendix A.3, the proposed method requires a lot of computational power even for MNIST. This raises concerns about its scalability to more complex datasets.\\n\\nWe believe that optimization of the implementation as well as utilization of architectures tailored to the problems at hand (e.g., convolutional layers for image classification) will allow for more complex tasks to be solved by infomorphic neurons. Nevertheless, we propose infomorphic networks primarily as a research tool to understand the local information-theoretic goals and suggest using the insights gained from this analysis in the design of more computationally efficient local learning rules which implicitly optimize the same PID goals for actual applications, which is a subject of future research.\"}", "{\"comment\": \"> The title \\u2018What should a neuron aim for\\u2019 is broader than the scope explored in the current work, where a single-layer bivariate and trivariate local learning framework is studied. There is still a large gap between the interpretability of the model here and that of the neuron. The advantage of interpretability from an application perspective is not demonstrated or discussed, which is expected to be very limited by the single-layer simplicity here.\\n\\nPlease note that, as outlined in the general comment, the more abstract, information-theoretic notion of interpretability that infomorphic neurons provide differs from other notions of interpretability of networks solving a specific task.\\nWhile we believe that the trivariate infomorphic framework has the potential to uncover 'what a neuron should aim for' also in more complex networks, we agree that the title may not optimally reflect the scope of this paper in particular. 
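(Editor's sketch: the Moebius inversion mentioned in the pseudocode reply above reduces, in the bivariate case, to simple arithmetic on the redundancy lattice. The helper below is hypothetical and only illustrative — the paper's trivariate decomposition inverts a larger lattice.)

```python
def bivariate_pid_atoms(red, mi_1, mi_2, mi_12):
    """Recover the four bivariate PID atoms from a redundancy value.

    red   -- redundant information I_cap({1}{2})
    mi_1  -- mutual information I(Y; S1)
    mi_2  -- mutual information I(Y; S2)
    mi_12 -- joint mutual information I(Y; S1, S2)
    """
    unq_1 = mi_1 - red                    # information carried only by S1
    unq_2 = mi_2 - red                    # information carried only by S2
    syn = mi_12 - unq_1 - unq_2 - red     # information available only jointly
    return {"red": red, "unq1": unq_1, "unq2": unq_2, "syn": syn}

# XOR example: each input alone is uninformative, together fully informative.
atoms = bivariate_pid_atoms(red=0.0, mi_1=0.0, mi_2=0.0, mi_12=1.0)
# atoms == {"red": 0.0, "unq1": 0.0, "unq2": 0.0, "syn": 1.0}
```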
The most important contribution of this paper is the introduction of trivariate PID goal functions, which incorporate an additional lateral input that allows for neurons to self-organize to encode unique relevant information contributions. Highlighting this self-organization aspect, we thus propose \\\"Neuron, do your part: Self-organizing global computation from local objective functions based on partial information decomposition\\\" as the new title of this work. Please let us know if you find this title adequate or whether you would prefer shortening the original title to \\\"Designing neuron-local objective functions based on information theory\\\", staying closer to the original submission.\\n\\n> How do we understand the performance presented in Fig. 3B by considering that at Nhid = 100, there is a convergence for all results?\\n\\nAs outlined in the legend of Fig. 3B, the hyperparameters of the goal function have been optimized only once for $N_\\\\mathrm{hid}=100$ neurons and then reused for different values of $N_\\\\mathrm{hid}$. Thus, it is expected that the infomorphic networks with $N_\\\\mathrm{hid}=100$ neurons come closest to the backpropagation performance, since the hyperparameters may not be strictly optimal for other values of $N_\\\\mathrm{hid}$. Furthermore, the networks with sparse connectivity have the number of connected neurons limited to at most 100, meaning that it is equivalent to the fully connected case for $N_\\\\mathrm{hid} \\\\leq 100$. We highlighted this fact better in the caption of Figure 3.\\n\\n> Why a sparse lateral outperform a dense lateral under large Nhid conditions?\\n\\nWhy exactly the sparse lateral connections outperform their dense counterparts and what the optimal sparsity level is remains an ongoing area of research. 
Our best current hypothesis is that once the network layer becomes large enough to encode the relevant label information many times over, it may no longer be optimal for each neuron to strive for unique information with respect to all other neurons. Instead, weakening the uniqueness constraint by considering only a subset of lateral neurons may---especially given the stochastic neuron outputs---lead to more robust representations.\\n\\n> What are the computational and memory costs?\\n\\nPlease refer to Appendix A.3 for an approximate estimation of the compute resources used in this project.\\n\\n> What does the extraction of high-effect parameters mean, and would this be useful to construct networks with higher performance, or understand the nature of the problems under training?\\n\\nThe parameter importance analysis reveals that the size of the PID goal function parameters alone does not necessarily reflect their significance in the training process, which gives additional insights into which kind of information processing is important to optimize for at the neuron level. In the future, these insights may indeed be used to prune low-effect goal function components to achieve higher efficiency. 
Furthermore, these insights may aid comparisons to the PID footprint of classical non-infomorphic local learning rules.\"}", "{\"comment\": \"We first want to draw your attention to an inaccuracy in your review: As stated in the paper, infomorphic networks achieve a test set accuracy of 97.5% on MNIST (for 2000 neurons and sparse connections) and 42.5% on CIFAR10 (for only 100 neurons), the latter matching or slightly outperforming backpropagation training on the same stochastic binary-activation networks.\\n\\n> While I believe we should not expect super comprehensive experiments from a paper that initially introduces a novel concept or paradigm, it might even improve the soundness of this paper if the authors can also include the performance on slightly more challenging datasets. If standard large datasets such as ImageNet are too computationally expensive, the authors could consider variants that are decently big and challenging, such as TinyImageNet.\\n\\nWe agree that validating our findings on more complex tasks is an important next step to establish infomorphic networks. However, our results indicate that in order to solve more complex tasks, more hidden layers and tailored architectures (e.g. convolutional layers) are required, which are subjects of ongoing research. As mentioned in the general comment, however, we have since applied infomorphic networks successfully to the AudioMNIST task, demonstrating the applicability of our approach for tasks beyond image classification.\\n\\n> For demonstration of experimental results, while I like the richness of Figure 3B and I do notice Figure 8 in the appendix, I would personally argue it would be more straightforward to include a \\u201cless exciting\\u201d bar and whiskers plot of backprop as well as different variants of bivariate and trivariate models. 
I suspect the authors left the performance of the bivariate model to the appendix to avoid overcrowding the results, but this omission from the main text might trigger confusion from the audience. Again, a good old bar and whisker might be a valid solution.\\n\\nWe agree that a simple overview of the network performances of all different setups for a fixed number of $N_\\mathrm{hid} = 100$ neurons facilitates the comparison between the different setups. We have thus added a new subplot to Figure 3 which shows all bivariate, trivariate and backprop results.\\n\\n> I still have a hard time understanding how the partial information components are realized in actual implementation. It would be good for the authors to provide a brief explanation on how that is done or point to the relevant text in the paper.\\n\\nThe computation of the PID atoms works as follows: First, the aggregated input variables $F$, $L$ and $C$ are computed as weighted sums of the feed-forward, lateral and context inputs. Subsequently, these scalars are quantized to 20 levels each, and together with the known conditional probabilities of the stochastic binary output given the inputs, the joint probability mass function is constructed from a batch of 1024 samples. From this probability mass function, the generalized redundancies $I^\\mathrm{sx}_\\cap$ are computed according to the analytical definition by Makkeh et al. 2021. Since these redundancies can be written as sums of PID atoms, the PID atoms themselves can finally be calculated from the redundancies by a Moebius inversion (i.e., a generalized inclusion-exclusion rule) of this lattice structure. In our manuscript, this procedure is explained in Chapter 4 together with the pseudocode provided in Appendix A.1. 
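As a rough illustration of the construction step described above (our own minimal NumPy sketch, not the paper's implementation; the function name, the placeholder sigmoid output model, and the array shapes are assumptions, and the $I^\mathrm{sx}_\cap$ redundancies and Moebius inversion are omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def joint_pmf(x_f, x_l, x_c, w_f, w_l, w_c, n_bins=20):
    """Illustrative sketch: build p(f, l, c, y) for one neuron from a batch.

    x_*: (batch, d) feedforward / lateral / context inputs; w_*: learned weights.
    """
    # 1) reduce each high-dimensional source to one scalar per sample
    f, l, c = x_f @ w_f, x_l @ w_l, x_c @ w_c

    # 2) quantize each scalar to n_bins equally sized bins
    def quantize(v):
        edges = np.linspace(v.min(), v.max(), n_bins + 1)
        return np.clip(np.digitize(v, edges[1:-1]), 0, n_bins - 1)

    fq, lq, cq = quantize(f), quantize(l), quantize(c)

    # 3) combine the empirical histogram over (f, l, c) with the known
    #    conditional output probability p(y | f, l, c) of the stochastic
    #    binary neuron (here a placeholder sigmoid of f alone)
    p1 = sigmoid(f)
    pmf = np.zeros((n_bins, n_bins, n_bins, 2))
    np.add.at(pmf, (fq, lq, cq, np.ones_like(fq)), p1)
    np.add.at(pmf, (fq, lq, cq, np.zeros_like(fq)), 1.0 - p1)
    return pmf / pmf.sum()
```

Note that the resulting pmf is only four-dimensional regardless of the input dimensionality, since each source is first collapsed to a scalar by its weighted sum.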
We have extended the pseudocode to explain how the redundancies and PID atoms are computed to make this point more clear.\\n\\n> It is a bit uncommon to have \\u201cRelated Works\\u201d and \\u201cLimitations and Outlook\\u201d as part of the Discussion section. Is that intentional or is it just a typo?\\n\\nThank you for this notice. We have reorganized the last sections to adhere to a more standard structure.\"}", "{\"comment\": \"You are correct in assessing that the conditional probability $p(y\\\\mid f,c,l)$ is defined directly from the activation function. Specifically, the probabilities for the two possible outcomes, $y=1$ and $y=\\u22121$, are given by $p(y=1\\\\mid f,c,l)=\\\\sigma(A(f,c,l))$ and $p(y=\\u22121\\\\mid f,c,l)=1\\u2212\\\\sigma(A(f,c,l))$, respectively. Note, however, that these probabilities depend on the weights, as $f$, $c$, and $l$ represent weighted sums of the high-dimensional inputs $\\\\mathbf{X}_F$, $\\\\mathbf{X}_C$ and $\\\\mathbf{X}_L$ of the three sources, respectively. Consequently, the gradients with respect to these weights can be computed straightforwardly using the chain rule.\"}", "{\"summary\": \"This paper introduces trivariate infomorphic neuron model to develop an interpretable and biologically-inspired local objective function for training artificial neural networks. This abstract neuron model is based on Partial Information Decomposition (PID) framework, which decomposes information into unique, redundant, and synergistic components (atoms). Local objective (goal) function is constructed as a linear combination of these PID information atoms. Neural network models with one hidden layer and various configurations (e.g. 
sparse lateral connectivity, different context signal) are trained with the proposed loss function on MNIST and CIFAR10 classification tasks, demonstrating promising results.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The problem is well-stated in the introduction.\", \"Section 3 and Section 4 effectively introduce the PID framework and explain how it is used to develop local objective functions for training neural networks.\", \"The shared code is well-structured, which aids the reproducibility of the work.\", \"MNIST results seem promising.\"], \"weaknesses\": [\"The paper has a few areas that could benefit from further clarification or expansion. Please see the following points and the Questions section for more details.\", \"If I did not overlook it, the consistency equations for the trivariate infomorphic neuron model are not included in the paper.\", \"The discussion of weight updates is limited. It is mentioned that either autograd or the analytical formulation in (Makkeh et al. 2023) can be used. However, the analytical approach in (Makkeh et al. 2023) applies to the bivariate infomorphic neuron model. Although the weight update derivation is potentially a similar approach, the absence of an explicit derivation raises a concern. Without this, I am uncertain if the resulting learning is biologically plausible.\", \"It is mentioned that (line 308) the $\\\\gamma$ parameters can be derived from intuitive notions. However, unless I overlooked something, the explanation for determining these parameters is not provided in detail beyond a brief note in the caption of Figure 2.D.\", \"I am concerned that plug-in estimation of the information atoms based on empirical probability mass functions may face challenges in high-dimensional settings due to the difficulty of density estimation. 
While MNIST may be manageable despite being high-dimensional, this could be an issue in more complex settings.\", \"Lines 511-515 mention that deeper networks were trained and achieved comparable or better accuracy results; however, these results are not shown. The paper only includes experiments with a single hidden layer.\", \"The algorithm in Function 2 (Appendix A.1) requires quantities labeled 'isx_redundancies' and 'pid_atoms' (lines 732 and 733), but it is not specified how these quantities are computed.\", \"According to Appendix A.3, the proposed method requires a lot of computational power even for MNIST. This raises concerns about its scalability to more complex datasets.\", \"I think the abbreviation IM in Table 1 likely stands for \\\"infomorphic networks,\\\" but this is not clarified in the paper.\", \"**Minor Comments:**\", \"In line 907, the word 'function' is repeated.\", \"I think Figure 9 is neither referenced in the text nor is it fully interpreted beyond the caption. Given its complexity, more discussion could make it easier to understand.\"], \"questions\": [\"Is the activation function in line 283 commonly used? How the $\\\\alpha_i$ and $\\\\beta_i$ (for $i = 1, 2$) values are determined for this activation function?\", \"Regarding lines 293-297, which mention that lateral connections introduce recurrence, how did you determine that presenting the same input twice is sufficient? For more complex datasets, more iterations may be needed. I am curious if the output converges after two presentations.\", \"Is there any particular heuristic or rationale for using 20 equally sized bins in the algorithms in Appendix A.1?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank the authors for their responses.\\n\\nCan the authors elaborate how p(y | f, l, c) is computed? The authors state that the loss can be differentiated w.r.t. 
p(y | f, l, c) - assuming p(f, l, c) is constant - and p(y | f, l, c) can be differentiated w.r.t. the neuron weights. However, p(y | f, l, c) is not dependent on the weights, because Y is directly given by the activation function defined as A(F,C,L) = F[(1 \\u2212 \\u03b11 \\u2212 \\u03b12) + \\u03b11 \\u03c3(\\u03b21FC) + \\u03b12 \\u03c3(\\u03b22FL)], where alpha and beta are all fixed. Can the authors clarify if I have any misunderstanding? Specifically, how is p(y | f, l, c) and its gradient w.r.t. the weights computed?\"}", "{\"comment\": \"> The PID-based goal functions and infomorphic neurons were originally proposed by Makkeh et al. 2023, and the contribution of this work is to introduce lateral connections as a third input class. However, in the Introduction, the authors claim the PID goal function to be one of the main contributions of this paper. The authors should explain more clearly the difference between this work and Makkeh et al. 2023, and define their contributions more accurately.\\n\\nThe main contribution of this paper is the introduction and study of trivariate PID goal functions, which, while building on the ideas of Makkeh et al., represent a pivotal step which makes infomorphic neurons solve classification tasks better than logistic regression and achieve performance on par with backpropagation. In our research, it has emerged that three different input signals are crucial to local learning: A receptive signal providing the input, a relevance signal helping to filter this input and a distribution signal enabling self-organization between neurons.\\nWe agree with you that the wording in the introduction can be improved to highlight this fact and have made changes to the manuscript accordingly.\\n\\n> Experiments were performed on neural nets with single hidden layer, which limits the scope of the paper. 
It is unclear whether the observations and insights can generalize to deeper neural networks.\\n\\nPlease refer to the general comment for a response to this point.\\n\\n> To compute the PID atoms during training, the authors empirically evaluated the joint probability mass function of the aggregated inputs and output of each neuron. Since the input is high-dimensional, is a large batch size needed for such numerical estimation to be accurate and stable? For example, the input dimension is at least 28^2 = 784 for MNIST, and each dimension is discretized to 20 levels. Is the batch size of 1024 sufficient?\\n\\nPlease refer to the general comment for a response to this point.\\n\\n> The authors mentioned that the shared-exclusion redundancy and thus the PID atoms are differentiable with respect to the probability distribution. However, in my understanding, the empirical probability mass function of a discrete random variable is not differentiable w.r.t. its samples. Then the goal function will not be differentiable w.r.t. a neuron's weights, which is needed for training. Could the authors clarify this point? In particular, how is the empirical probability mass function differentiated w.r.t. the output (and subsequently the weights) of the neuron?\\n\\nIn line with the approach by Kay and Phillips (1997) and Makkeh et al. (2023), the PID atoms are only differentiated with respect to the conditional probabilities of the neuron's output $Y$. The joint probability mass is constructed in two steps: First, an empirical histogram is created from the realizations of the quantized aggregated variables $F$, $L$ and $C$. For each sample, the infomorphic neuron defines the conditional probability of the neuron's binary output $Y$ given the inputs F, L and C directly, which allows us to construct the full probability as $p(f, l, c, y) = p(f, l, c)p(y|f, l, c)$. 
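As a rough numerical illustration of this point (our own sketch, not the paper's implementation): holding the histogram $p(f, l, c)$ fixed, the conditional probability $p(y{=}1|f, c, l) = \sigma(A(f, c, l))$ varies smoothly with the weights through the aggregated scalars, so its gradient can be checked by finite differences. The activation follows the form quoted in the reviewer's question; the particular $\alpha_i$, $\beta_i$ values below are placeholders, not the paper's settings.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed mixing parameters (placeholders for illustration only).
A1, A2, B1, B2 = 0.3, 0.3, 1.0, 1.0

def p_y1(w_f, x_f, w_c, x_c, w_l, x_l):
    """P(Y=+1 | f, c, l) = sigmoid(A(f, c, l)); the dependence on the weights
    enters through the aggregated scalars f, c, l."""
    f, c, l = w_f @ x_f, w_c @ x_c, w_l @ x_l
    A = f * ((1 - A1 - A2) + A1 * sigmoid(B1 * f * c) + A2 * sigmoid(B2 * f * l))
    return sigmoid(A)

def grad_wf(w_f, x_f, w_c, x_c, w_l, x_l, i, eps=1e-6):
    """Central-difference gradient of P(Y=+1 | ...) w.r.t. the i-th feedforward weight."""
    wp, wm = w_f.copy(), w_f.copy()
    wp[i] += eps
    wm[i] -= eps
    return (p_y1(wp, x_f, w_c, x_c, w_l, x_l)
            - p_y1(wm, x_f, w_c, x_c, w_l, x_l)) / (2 * eps)
```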
While it is true that $p(f, l, c)$ is not differentiable with respect to the sources $F$, $L$ and $C$, the conditional probability $p(y|f, l, c)$ varies smoothly with the inputs, making the full probability distribution differentiable under the assumption that $p(f, l, c)$ remains constant. Despite the fact that we do not currently have a strong a priori argument why gradients of $p(f, l, c)$ can be omitted, the actual changes in PID atoms over training and the good task performance when using these gradients in experiments provide an ex post justification for this procedure. Nevertheless, in ongoing research, we are investigating the possibility of making the full pmf differentiable by means of stochastic quantization of the inputs. We clarified these points in the appendix of the manuscript.\"}", "{\"comment\": \"Thank you for your response.\\n\\nI presumed that you computed $p(y \\\\mid f, c, l)$ for each bin, but it seems that you computed it for each sample. I believe $p(y \\\\mid f, c, l)$ for each bin is needed to calculate the atoms in information decomposition. Could you clarify how $p(y \\\\mid f, c, l)$ for each sample is converted to $p(y \\\\mid f, c, l)$ for each bin?\"}", "{\"title\": \"General Response to the Reviewers\", \"comment\": \"Thank you for your insightful and constructive reviews. Incorporating your suggestions into our manuscript has certainly helped to improve the clarity and precision of our work.\\nTo avoid duplication, we will address questions raised by multiple reviewers in this general comment.\\n\\n**Dimensionality of inputs and estimation of PID atoms**\\n\\nTwo reviewers raised the question of whether the dimensionality of the input or the batch size of 1024 pose a problem to the estimation of the PID atoms. 
We believe this not to be the case, as the high-dimensional input or lateral connections are not used in the estimation themselves but only the aggregated scalar values F, C and L, which are reduced to a single dimension each by a learned weighted sum. For this reason, the joint probability mass function is only four-dimensional irrespective of the dimensions of the inputs. Furthermore, the probability mass function has empirically been observed to be quite sparse, making 1024 samples sufficient for a good approximation of the PID atoms.\\nTo validate this assumption, we ran MNIST classifier networks with $N=100$ neurons in the hidden layer with smaller and larger batch sizes. Despite matching the learning rates accordingly, in this first test, the runs with larger batch sizes converge significantly slower but reach approximately the same final accuracy (see new appendix section). While determining the optimal batch size remains a question for future research, we thus believe the chosen batch size of 1024 samples to be sufficient for the experiments shown.\\n\\n**Scaling to multiple hidden layers for solving more complex tasks**\\n\\nFurthermore, multiple reviewers have highlighted the importance of analyzing multi-layer networks using infomorphic neurons. We agree that using deeper layers is the logical next step in our research and have expanded on the preliminary results in a new appendix section. These preliminary results show that deeper infomorphic networks can train with the same goal functions as their shallower counterparts, reaching comparable or slightly higher accuracy when matching the number of learned parameters. 
Nevertheless, since the goal of this paper is to introduce the framework of trivariate infomorphic networks, a thorough investigation of how optimal goal functions differ between hidden layers is left to future research.\\nTo showcase the generality of PID goal functions, we have in the meantime applied our approach to the AudioMNIST dataset, achieving test accuracies between $94.7\\\\%$ and $96.3\\\\%$ for 10 training runs using the same optimized goal function from the MNIST image recognition task. While the AudioMNIST task may not be significantly more complex, this result showcases the transferability of the local goal function to classification tasks from domains other than image recognition, which likely have very different input statistics.\\n\\n**On the concept of \\\"interpretability\\\"**\\n\\nWe want to emphasize that the more abstract notion of task-independent interpretability of local goals provided by infomorphic networks is different from interpretability from an application perspective: The interpretability that infomorphic neurons offer is on the level of the information-theoretic goals of the individual neurons, revealing what information processing on the local level is sufficient to solve a particular global task. These results are expected to depend on the type of task (e.g. classification), but should be transferable between tasks of the same nature (e.g. MNIST and CIFAR10) and between different mechanisms which produce the activation values (e.g., activation function).\\nIn future work, these tools may be used to discover which neurons do or do not contribute to the solving of a global task. Because the used redundancy measure is *local*, meaning it can be evaluated for individual samples, infomorphic neurons may also be used to identify particular classification labels for which a neuron does or does not contribute. 
\nWe added a paragraph to the paper's discussion to explain this distinction.\"}", "{\"comment\": \"> Is the activation function in line 283 commonly used? How are the $\\alpha_i$ and $\\beta_i$ (for $i=1,2$) values determined for this activation function?\\n\\nThe activation function in line 283 is an extension of the activation function originally devised by Kay and Phillips. While we have not made a thorough investigation into the effect of the exact choice of parameters for this goal function, we believe that as long as the feedforward and lateral inputs retain their dominant effect on the outputs, the results will likely be very similar. This invariance is demonstrated by the results of the simple linear goal function suggested in Section 4.\\n\\n> Regarding lines 293-297, which mention that lateral connections introduce recurrence, how did you determine that presenting the same input twice is sufficient? For more complex datasets, more iterations may be needed. I am curious if the output converges after two presentations.\\n\\nThank you for this valuable suggestion. Intuitively, we expect a swift convergence of the activations due to the subordinate effect of the lateral connections in the activation function. Nevertheless, in order to validate that the activation values do in fact converge after only two iterations, we conducted additional experiments which confirm that the activations indeed only marginally change after two iterations. We added a brief section to the appendix where we discuss those results as well as the reasons for the fast convergence.\\n\\n> Is there any particular heuristic or rationale for using 20 equally sized bins in the algorithms in Appendix A.1?\\n\\nThe number of bins needs to be balanced between two opposing criteria: Too small bin numbers lead to a greater information loss as more different samples are gathered into the same bin. 
On the other hand, too large bin numbers make the information-theoretic quantities more difficult to estimate reliably. The number of 20 bins has been hand-chosen to balance between these two desiderata, although the effect of choosing slightly fewer or slightly more bins has not yet been conclusively analyzed. We have added an appendix section showcasing and discussing the effects of changing the number of bins.\"}", "{\"comment\": \"I thank the authors for their detailed explanations and revisions. Based on their rebuttal and overall response, I have increased my score from 5 to 6.\"}", "{\"metareview\": \"This work deals with the dissonance between global and local learning rules in neural networks. It takes an information theory approach to enable neurons to steer the integration of different classes of input information to drive its own update. The way that the different classes of information can be combined can be dictated by the researcher or fit to data. The reviewers noted that the overall conceptual framework was novel and interesting, and especially appreciated the interpretable nature of the model parameters.
Based on this intuition, the authors propose a biologically inspired neural network that enhances interpretability of the learning dynamics of individual neurons. The authors leverage Partial Information Decomposition (PID) in information theory to formulate objective functions on the individual neuron level, and these neurons are termed \\u201cinfomorphic neurons\\u201d. This novel formulation is able to achieve comparable performance to backpropagation on MNIST and CIFAR10 datasets, showing preliminary signs of utility. Besides, the proposed framework allows for interpretability analysis on the partial information components which is quite unique and interesting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1.\\tThe topic of biologically inspired neural networks has attracted great attention in recent years. Proposing alternative solutions to the long-dominating backpropagation method is also interesting and profoundly influential. I encourage the authors to work along these lines.\\n2.\\tOverall this is a paper very rich in content.\\n3.\\tThe visualizations in Figure 1 and 2 are very helpful for understanding the partial information components in partial information decomposition.\\n4.\\tThe performance on MNIST and CIFAR10 is quite promising. From what I read, the proposed method achieved beyond 98% accuracy on MNIST and 94.4% accuracy on CIFAR10.\\n5.\\tThe parameter importance assessment is thoughtful and informative.\", \"weaknesses\": \"1.\\tWhile I believe we should not expect super comprehensive experiments from a paper that initially introduces a novel concept or paradigm, it might even improve the soundness of this paper if the authors can also include the performance on slightly more challenging datasets. 
If standard large datasets such as ImageNet are too computationally expensive, the authors could consider variants that are decently big and challenging, such as TinyImageNet.\\n2.\\tFor demonstration of experimental results, while I like the richness of Figure 3B and I do notice Figure 8 in the appendix, I would personally argue it would be more straightforward to include a \\u201cless exciting\\u201d bar and whiskers plot of backprop as well as different variants of bivariate and trivariate models. I suspect the authors left the performance of the bivariate model to the appendix to avoid overcrowding the results, but this omission from the main text might trigger confusion from the audience. Again, a good old bar and whisker might be a valid solution.\", \"questions\": \"1.\\tPlease refer to Weakness 2. Would the authors consider the \\u201cless exciting\\u201d representation of the quantitative results or provide some other alternative that is similarly straightforward?\\n2.\\tI still have a hard time understanding how the partial information components are realized in actual implementation. It would be good for the authors to provide a brief explanation on how that is done or point to the relevant text in the paper.\\n3.\\tIt is a bit uncommon to have \\u201cRelated Works\\u201d and \\u201cLimitations and Outlook\\u201d as part of the Discussion section. 
Is that intentional or is it just a typo?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Manuscript Title Change\", \"comment\": \"Following a suggestion by reviewer Mc5F, we have previously changed the title of our work to \\\"Neuron, do your part: Self-organizing global computation from local objective functions based on partial information decomposition\\\" to address the concern that the question \\\"What should a neuron aim for?\\\" in the original title may be understood to imply more extensive analysis of larger networks. However, in communication with colleagues we found that the new title can be misunderstood as implying a global agent that \\\"makes neurons do their part\\\", when focus should be on their local learning and self-organization.\\n\\nShould the reviewers and editors agree, we therefore suggest reverting to the original title \\\"What should a neuron aim for? Designing local objective functions based on information theory\\\". We believe that the first part of this title introduces the topic in a thought-inspiring manner and highlights the generality of the proposed framework in principle, with the second half clearly defining the scope of the present submission. If the reviewers have serious concerns with this original paper title, or if regulations disallow further changes of the title at this point, we are nevertheless content with keeping the new suggested title.\\n\\nThe regulations regarding title changes remain unclear to us: While the ICLR author guidelines clearly specify that the title can be augmented during the rebuttal phase, there was no option to update the official title in the rebuttal form on the OpenReview website. 
We thus kindly ask the editor to clarify which title should be used in the camera-ready version in accordance with the regulations in their final decision.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"1. Thanks for pointing out the incorrect quote in my original review. I misread the numbers.\\n2. Thanks for updating Figure 3 and including the more straightforward comparison.\\n3. Thanks for explaining the PID implementation.\\n\\nBest of luck!\"}", "{\"summary\": \"Based on the concept of partial information decomposition in information theory, this work formulates a learning framework that allows the implementation of per-neuron goal functions and neuronwise interpretability. In an ANN setup for classification tasks, the goal function is optimized via parameters of the \\u2018infomorphic\\u2019 neurons, with neuron-level interpretation, good performance (comparable to the same ANN trained with bp) and insights into the local computational goals. Examples with bivariate and trivariate infomorphic neurons are demonstrated where the three-input classes unlock the potential of information theoretic learning, validating the abovementioned advantages in comparison with classical information decomposition.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The use of partial information decomposition in a learning framework with bivariate and trivariate implementations enables interpretable information processing at the per-neuron level, which is also new. 
The discussion based on a comparison between heuristic and optimization approaches demonstrates the potential of the interpretability of the local learning framework in a task-relevant context and without the loss of performance compared to an ANN trained with bp.\", \"weaknesses\": \"The title \\u2018What should a neuron aim for\\u2019 is broader than the scope explored in the current work, where a single-layer bivariate and trivariate local learning framework is studied. There is still a large gap between the interpretability of the model here and that of the neuron. The advantage of interpretability from an application perspective is not demonstrated or discussed, which is expected to be very limited by the single-layer simplicity here.\", \"questions\": \"How do we understand the performance presented in Fig. 3B by considering that at Nhid = 100, there is a convergence for all results? Why does a sparse lateral outperform a dense lateral under large Nhid conditions? What are the computational and memory costs?\\nWhat does the extraction of high-effect parameters mean, and would this be useful to construct networks with higher performance, or understand the nature of the problems under training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
Experiments on a single-hidden-layer network show that the proposed learning method \\ncan achieve similar performance to conventional training by cross-entropy loss, while providing an interpretable learning process for individual neurons.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The idea of using information-theoretic loss functions to train individual neurons is interesting, with a nice connection to biological neural networks and neuroscience.\\n\\nThe novelty of adding a third input class representing lateral connections between neurons is intuitive and interesting and demonstrates better empirical performance than the bivariate model.\\n\\nThe overall method and motivation are clearly presented.\\n\\nThe experiments are well designed. Nice ablation study on the goal parameters. Good interpretation of \\\"important\\\" PID atoms identified by goal parameter optimization.\", \"weaknesses\": \"Some parts of the methodology are not clear. Please see questions below.\\n\\nThe PID-based goal functions and infomorphic neurons were originally proposed by Makkeh et al. 2023, and the contribution of this work is to introduce lateral connections as a third input class.\\nHowever, in the Introduction, the authors claim the PID goal function to be one of the main contributions of this paper. The authors should explain more clearly the difference between this work and Makkeh et al. 2023, \\nand define their contributions more accurately.\\n\\nExperiments were performed on neural nets with a single hidden layer, which limits the scope of the paper. It is unclear whether the observations and insights can generalize to deeper neural networks.\", \"questions\": \"Questions related to numerical computation of PID atoms:\\n- To compute the PID atoms during training, the authors empirically evaluated the joint probability mass function of the aggregated inputs and output of each neuron. 
Since the input is high-dimensional, is a large batch size \\nneeded for such numerical estimation to be accurate and stable? For example, the input dimension is at least 28^2 = 784 for MNIST, and each dimension is discretized to 20 levels. Is the batch size of 1024 sufficient?\\n\\n- The authors mentioned that the shared-exclusion redundancy and thus the PID atoms are differentiable with respect to the probability distribution. However, in my understanding, the empirical probability mass function \\nof a discrete random variable is not differentiable w.r.t. its samples. Then the goal function will not be differentiable w.r.t. a neuron's weights, which is needed for training. Could the authors clarify this point?\\nIn particular, how is the empirical probability mass function differentiated w.r.t. the output (and subsequently the weights) of the neuron?\\n\\nThere seems to be a gap between the claimed \\\"neuron-level interpretability\\\" and the proposed learning framework. \\nUsually, interpretability means understanding how a trained network makes predictions and what the network has learned.\\nHowever, the method proposed in this paper only provides interpretation for the learning objective, rather than insight of what the model and each neuron has learned. \\nFurthermore, the interpretability is still global, not neuron-level, as the goal parameters are shared across all neurons, i.e., the interpretation for all neurons are the same. \\nFinally, the cross-entropy loss and the backpropagation process are quite human interpretable, in my opinion.\\nCould the authors comment on this?\\n\\n\\\"For the supervised classification task at hand, the function ... has been chosen ... This ensures that the network performs similarly during training, when context and lateral inputs are provided, and for evaluation, \\nwhere the context signal is withheld.\\\"\\nIt seems that the given activation function is for training, since it uses the context (label) as input. 
Can the authors define clearly what the activation function during testing is?\\n\\n\\\"One promising path towards constructing deeper networks is using stacked hidden layers that receive feedback from the next layer, similar to setup 3.\\\"\\nIt is a bit unconvincing to replace label feedback by feedback connection from the next layer. In the latter, the neuron's learning goal is to capture the part of the feedforward signal that agrees with the next layer's output, which is less intuitive than \\ncapturing the part that agrees with the label.\\n\\nFigure 8, comparison with bivariate model: Since the goal is to compare trivariate with bivariate, it might be better to put trivariate and bivariate on the same axis. If it'd be too crowded, \\nmaybe group the results by \\\"Heuristic\\\"/\\\"Optimized\\\".\\n\\nIn the \\\"Goal parameters\\\" paragraph in Experiments, it might be better to state that \\\"heuristic goal function\\\" is $\\\\Pi_{ \\\\{ F\\\\} \\\\{C\\\\}}$. Although it can be inferred from Fig. 4, it's better to define it in the text as well.\\n\\nIn Fig. 4, is the difference in validation accuracy defined as \\\"after setting to 0 - before\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
CL3U0GxFRD
Exponential Topology-enabled Scalable Communication in Multi-agent Reinforcement Learning
[ "Xinran Li", "Xiaolu Wang", "Chenjia Bai", "Jun Zhang" ]
In cooperative multi-agent reinforcement learning (MARL), well-designed communication protocols can effectively facilitate consensus among agents, thereby enhancing task performance. Moreover, in large-scale multi-agent systems commonly found in real-world applications, effective communication plays an even more critical role due to the escalated challenge of partial observability compared to smaller-scale setups. In this work, we endeavor to develop a scalable communication protocol for MARL. Unlike previous methods that focus on selecting optimal pairwise communication links—a task that becomes increasingly complex as the number of agents grows—we adopt a global perspective on communication topology design. Specifically, we propose utilizing the exponential topology to enable rapid information dissemination among agents by leveraging its small-diameter and small-size properties. This approach leads to a scalable communication protocol, named ExpoComm. To fully unlock the potential of exponential graphs as communication topologies, we employ memory-based message processors and auxiliary tasks to ground messages, ensuring that they reflect global information and benefit decision-making. Extensive experiments on large-scale cooperative benchmarks, including MAgent and Infrastructure Management Planning, demonstrate the superior performance and robust zero-shot transferability of ExpoComm compared to existing communication strategies. The code is publicly available at [https://github.com/LXXXXR/ExpoComm](https://github.com/LXXXXR/ExpoComm).
[ "multi-agent reinforcement learning", "communication" ]
Accept (Poster)
https://openreview.net/pdf?id=CL3U0GxFRD
https://openreview.net/forum?id=CL3U0GxFRD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zHWdzLn8vy", "zFSVkEhg0R", "yCA1YUuQkV", "tKC8F3tfWq", "pSriLiDyvl", "pEeBbxwwwR", "guGTGhgDDN", "eBz2f4WzpH", "b86UWKme4h", "aUOpSoFr2r", "Wr7NyfgdMx", "WTOPfmbfWy", "RmQilNqqWo", "O30CVE5KVy", "M6n1WaoQxo", "Er5mxlt49g", "C1KhPQ0aJO", "BudxVmoeSb", "5Rmn4XSwaW" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732629224003, 1732656165945, 1732176557473, 1732675672824, 1732176352354, 1734847815227, 1732696143958, 1737524080132, 1732176426432, 1732176652205, 1733106425452, 1731100487888, 1730327366851, 1732176606864, 1732675433565, 1732695445944, 1730810624490, 1732538820494, 1730696465825 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10832/Reviewer_9jpX" ], [ "ICLR.cc/2025/Conference/Submission10832/Reviewer_MSLo" ], [ "ICLR.cc/2025/Conference/Submission10832/Authors" ], [ "ICLR.cc/2025/Conference/Submission10832/Authors" ], [ "ICLR.cc/2025/Conference/Submission10832/Authors" ], [ "ICLR.cc/2025/Conference/Submission10832/Area_Chair_7zdC" ], [ "ICLR.cc/2025/Conference/Submission10832/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10832/Authors" ], [ "ICLR.cc/2025/Conference/Submission10832/Authors" ], [ "ICLR.cc/2025/Conference/Submission10832/Authors" ], [ "ICLR.cc/2025/Conference/Submission10832/Reviewer_MSLo" ], [ "ICLR.cc/2025/Conference/Submission10832/Reviewer_6QM3" ], [ "ICLR.cc/2025/Conference/Submission10832/Authors" ], [ "ICLR.cc/2025/Conference/Submission10832/Authors" ], [ "ICLR.cc/2025/Conference/Submission10832/Reviewer_6QM3" ], [ "ICLR.cc/2025/Conference/Submission10832/Reviewer_9jpX" ], [ 
"ICLR.cc/2025/Conference/Submission10832/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the authors' response and the updates to the paper. Including the limitations enhances the reader's understanding, and the added images and details about the experimental setup improve clarity. I recommend revising the paper to address any remaining typographical and grammatical errors.\"}", "{\"comment\": \"Thank you to the authors for their responses and updates on the paper. Everything is clear now, and I have no further questions. I remain inclined to recommend acceptance of this paper.\"}", "{\"title\": \"(1/2)\", \"comment\": \"Thank you for your constructive feedback. Regarding your questions and suggestions, we have updated the manuscript accordingly (highlighted in blue) and would like to provide clarifications below. If you have any follow-up questions or comments, please let us know, and we will be happy to discuss further.\\n\\n**Q1:** \\n> It is unclear how the proposed exponential graph can be helpful for improving agent communication under the target scenario. Providing some theoretical analysis or a motivating example would be helpful. The toy example is helpful, but random global communication is not considered there.\\n\\n**A1:** The proposed ExpoComm enhances agent communication by introducing a topology that propagates information among all agents effectively and at low cost. To support this:\\n- We analyze the exponential graph properties in Section 3.1.3 (lines 251-292). Specifically, the effective information propagation is enabled by the small-diameter property ($\\text{diameter}(\\mathcal{G}^t) = \\lceil \\log_2{(N-1)} \\rceil$) and the low cost is ensured by the small-size property ($\\lvert \\mathcal{E}^t \\rvert = N$ for one-peer exponential graph). 
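To make these two properties concrete, the dissemination behavior can be checked with a short, self-contained simulation. This is an editorial illustration, not part of the paper or its released code: the neighbor rule (i + 2^(t mod ceil(log2 N))) mod N follows the common definition of one-peer exponential graphs and is an assumption here, and the helper names are hypothetical.

```python
import math

def one_peer_neighbor(i, t, n):
    # At round t, agent i sends its message to one peer at an exponentially
    # growing offset; the offsets cycle through 1, 2, 4, ..., 2^(tau-1).
    tau = math.ceil(math.log2(n))
    return (i + 2 ** (t % tau)) % n

def rounds_to_full_dissemination(n):
    # Synchronous simulation: every agent starts knowing only its own ID;
    # each round, every agent forwards its pre-round knowledge to one peer.
    known = [{i} for i in range(n)]
    t = 0
    while any(len(k) < n for k in known):
        new_known = [set(k) for k in known]
        for i in range(n):
            new_known[one_peer_neighbor(i, t, n)] |= known[i]
        known = new_known
        t += 1
    return t

for n in (16, 20, 64):
    # Each round costs exactly n directed messages (one send per agent),
    # and every agent is fully informed within ceil(log2 n) rounds.
    assert rounds_to_full_dissemination(n) == math.ceil(math.log2(n))
    print(f"N={n}: {n} messages/round, fully informed after "
          f"{math.ceil(math.log2(n))} rounds")
```

The same loop also exposes the diameter-size trade-off discussed here: a fully connected topology would inform everyone in a single round but at a cost of N(N-1) messages per round, whereas the one-peer topology pays only N messages per round for ceil(log2 N) rounds.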
Following the suggestion from reviewer 56ww, we have supplemented the theoretical analysis in Appendix A to provide further support for these properties.\\n- A toy example in Figure 2 illustrates message dissemination with different graph topologies. We demonstrate a trade-off between graph diameter and size, reflecting the balance between communication performance and overhead in many-agent systems. Exponential topologies strike a balance in this trade-off, showing strong information diffusion even with a minimal communication budget of $N$.\\n- Extensive experimental results on benchmarks MAgent and IMP in Figure 4 and Table 1 show that ExpoComm outperforms baseline algorithms with the same communication budgets, demonstrating its effectiveness.\\n\\nTo better address the reviewer\\u2019s concerns and clear up any misunderstanding, could reviewer 56ww kindly clarify what \\\"random global communication\\\" refers to in the original review?\\n\\n**Q2:** Besides the exponential graph, the contribution of this work seems limited. Highlighting and clarifying the contributions of this work would be helpful for better understanding.\\n\\n**A2:** Thank you for this suggestion. This work addresses a research gap in scalable multi-agent communication, as most existing strategies are designed for and tested under small-scale systems. Many real-world applications [1,2,3] require communication strategies that scale to dozens or even hundreds of agents. To address this gap, we made the following contributions:\\n- We propose an exponential topology-enabled communication protocol, ExpoComm, as a scalable solution for MARL communication. 
It supports effective message dissemination among agents at low cost, enabled by the small-size and small-diameter properties of exponential graphs.\\n- To fully leverage these properties for efficient information dissemination, we employ memory-based blocks for message processing and auxiliary tasks to ground messages, ensuring they effectively reflect global information.\\n- Through extensive experiments across twelve scenarios on large-scale benchmarks, including MAgent and Infrastructure Management Planning (IMP), we demonstrate the superior performance and transferability of ExpoComm over existing baseline methods, handling large numbers of agents up to a hundred.\\n\\n**Q3:** Evaluation seems incomplete. Comparing the proposed method to the traditional broadcast communication methods can help make the claim more persuasive.\\n\\n**A3:** For comparisons with broadcast communication methods, please refer to A4 below. Regarding multicast communication, our original manuscript compares ExpoComm with traditional multicast methods, including CommFormer, ER, DGN+TarMAC, using two different communication budgets ($K = \\\\lceil \\\\log_2{N} \\\\rceil$ and $K=1$). The results in Figure 4 and Table 1 demonstrate the superior performance of ExpoComm compared to these baselines. Please see Section 4.2 for a more detailed discussion.\"}", "{\"comment\": \"We are pleased to hear that your initial concerns have been addressed, and we will continue proofreading the paper to resolve any remaining typos. Once again, we sincerely thank you for your time and effort in providing valuable feedback to help us improve our work.\"}", "{\"comment\": \"Thank you for your positive review. Regarding your questions and suggestions, we have updated the manuscripts accordingly (highlighted in blue) and would like to provide clarifications below. 
If you have any follow-up questions or comments, please let us know, and we will be happy to discuss further.\\n\\n**Q1:** \\n>The presentation of section 3.3 is not ideal, e.g. line 323, can you elaborate more on the prediction function f? line 354 to line 355, what is t'?\\n\\n**A1:** \\nThank you for your suggestion. We have improved the readability of Section 3.3 by adding subtitles and rewording the relevant content. Please see the updated section highlighted in blue. The function $f(\\\\cdot; \\\\phi)$ refers to the learnable prediction function used for grounding messages. It is implemented using a two-layer MLP in our experiments (Appendix B.1, line 809). In Equation 5, $t$ and $t'$ refer to the timesteps of negative data pairs.\\n\\n**Q2:** \\n> In figure 4(f), why does the test win rate of ExpoComm start to drop from 4*1e6 step?\\n\\n**A2:** \\nWe acknowledge that there is some performance fluctuation for ExpoComm throughout training in the Battle scenario. Similar fluctuations are also observed for baselines in the same scenario, such as IDQN (green solid line) at $4.5 \\\\times 10^6$ timesteps in Battle w/ 64 agents, DGN+TarMAC (green solid line) at $2.6 \\\\times 10^6$ timesteps in Battle w/ 20 agents and CommFormer (purple solid line) at $4.3 \\\\times 10^6$ timesteps in Battle w/ 20 agents. We suspect this fluctuation may be due to the higher sensitivity of the win rate metric compared to return, as there were no outstanding abnormal patterns in return or loss curves for ExpoComm or the baselines.\\n\\n**Q3:** \\n> This algorithm should excel in large-scale multi-agent environment, so can we suppose the performance gap between the algorithm proposed and other baselines should increase when the number of agents increase? If so, why can't we see such trend in figure 4?\\n\\n**A3:** To some extent, this is true. Such implications can be observed from the AdversarialPursuit scenario in Figure 4, Uncorrelated and Correlated scenario in Table 1. 
However, we refrained from explicitly making this claim in our manuscript due to the challenges in rigorously defining the performance gap across different settings. Specifically:\\n- Comparing returns (or their differences) directly across different settings is problematic, as the returns are defined under different conditions.\\n- It is unclear which baseline should be selected as the reference algorithm when calculating such a performance gap.\"}", "{\"metareview\": \"This paper studies the very timely and important problem of efficient communication in multi-agent RL. This is an under-studied topic in the RL community that deserves more attention. The proposed approach based on exponential topology is refreshing and brings new connections that will likely spawn interesting follow-up works. Particularly, the proposed framework leverages communication topologies with small diameters for fast information dissemination and small graph sizes for reduced communication overhead. This design allows the system to maintain roughly linear communication costs relative to the number of agents, and serves as a decent baseline for follow-up works.\", \"additional_comments_on_reviewer_discussion\": \"NA\"}", "{\"comment\": \"Thank you for your kind response. We sincerely appreciate the time and effort you have invested in providing valuable feedback to help improve our work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your thoughtful review. We have updated the manuscript accordingly (highlighted in blue) and provide clarifications below. If you have any follow-up questions or comments, please let us know, and we will be happy to discuss further.\\n\\n**Q1:** A more explicit discussion of the limitations: tasks requiring more targeted communication or non-cooperative tasks.\\n\\n**A1:** Thank you for the suggestion. 
We have included a subsection discussing limitations and future work (see Appendix C.3).\\n- We acknowledge that ExpoComm may not perform well in scenarios requiring more targeted communication, network MDPs, or non-cooperative tasks. We have identified possible paths to further improve communication performance in many-agent systems.\\n- Regarding the specific pairwise connectivity mentioned by reviewer 9jpX, this is indeed a very interesting question. Generally, there is a trade-off between adopting a global or local (pairwise) perspective when designing communication strategies in MASs of different scales. A local perspective, which focuses on task-oriented pairwise connectivity, can enhance task-specific performance in small-scale MASs, as shown in previous work [1]. However, it becomes extremely challenging to learn such relationships in large-scale MASs because the number of communication pairs scales quadratically with the number of agents. This observation motivates our ExpoComm, which adopts a global perspective in designing communication topology. While ExpoComm performs well in large-scale many-agent systems, it may not excel in scenarios requiring highly targeted communication. A promising direction for future work would be to design a mechanism that enables a seamless transition between global and local perspectives. Such a mechanism could potentially improve the adaptability of multi-agent communication schemes, allowing them to perform effectively across a wider range of scenarios. We leave this exploration for future work.\\n\\n**Q2:** More details and visualization about the experimental setups.\\n\\n**A2:** Thank you for the helpful suggestion! 
We have made the following updates to address this point:\\n- Expanded Appendix B.2 to include more descriptions of the environmental settings along with snapshots (Figures 7 and 8) to improve clarity and readability.\\n- Supplemented visualization results for ExpoComm and IDQN (without communication) in both AdversarialPursuit and Battle scenarios in Appendix C.1. These visualizations demonstrate that ExpoComm enhances cooperation by enabling agents to adopt a global perspective. For example, agents exhibit behaviors such as surrounding opponents and allocating more agents to the front lines, even when communication budgets are low ($K=1$). Please see Figures 9, 10, and Appendix C.1 for visualization results and more detailed discussion.\\n\\n**Q3:** \\n> In Figures 4 and 6, the x-axis is labeled as \\\"test return,\\\" although it appears to show plots related to training return. Could the authors clarify this discrepancy?\\n\\n**A3:** Thank you for pointing this out. To avoid any ambiguity, we have updated all y-axis labels to \\\"evaluation return\\\" or \\\"evaluation win rate\\\" (see Figures 4, 6, and 11). To clarify, the y-axis represents the evaluation return during the training process, which is induced by the learned policy without exploration (taking the argmax of Q functions), while the training return is induced by the learned policy with exploration (using epsilon-greedy strategies). \\n\\n-----\\n\\nReferences\\n\\n[1] Shengchao Hu, Li Shen, Ya Zhang, and Dacheng Tao. Learning multi-agent communication from graph modeling perspective. In Proceedings of the 12th International Conference on Learning Representations, 2024.\"}", "{\"comment\": \"Thank you for your positive review. We have updated the manuscript accordingly (highlighted in blue) and provide detailed clarifications below. 
If you have any follow-up questions or comments, please let us know, and we will be happy to discuss further.\\n\\n**Q1:** Communication overhead.\\n\\n**Q1.1:**\\n> The definition of communication overhead in this paper is rather ambiguous... Does communication overhead in this paper refer to number of messages passed? Can you provide a definition in-text?\\n\\n**A1.1:** We apologize for the ambiguity. Yes, in our manuscript, communication overhead refers to the number of messages passed between agents. We have clarified this definition in Section 3.1.2 and reorganized the experiment setups in Section 4.1 to include a detailed explanation of the communication budget settings.\\n\\n**Q1.2:** Does ExpoComm result in lower numbers of messages passed between agents?\\n\\n**A1.2:** In our experimental setup, we maintain equal communication budgets (number of messages passed) across all methods except communication-free baselines. This design allows us to fairly evaluate the effectiveness of different communication strategies. Our results demonstrate that ExpoComm achieves superior performance while operating under the same communication constraints as baseline algorithms.\\n\\n**Q1.3:**\\n> Are there additional communication savings that can be achieved by your method beyond its exponential graph structure?\\n\\n**A1.3:** While our current implementation focuses on the exponential graph structure, there are promising avenues for additional communication savings. These include learning low-entropy messages [1] and exploiting temporal communication sparsity, which we discuss as future directions in Appendix C.3.\\n\\n**Q1.4:**\\n> Did you experiment with more K values beyond log2N and 1? If so, what did you observe and why did you believe these two values in the paper would be representative examples?\\n\\n**A1.4:** Our choice of $K=\\log_2N$ and $K=1$ was primarily motivated by theoretical considerations rather than experimental observations. 
These values represent logarithmic growth and constant complexity respectively, which are particularly relevant for scaling to large-scale multi-agent systems (MASs). Traditional approaches using fully-connected graphs ($K=N$) or sparse connections [2] ($K=\\text{sparsity} \\cdot N$, where $\\text{sparsity}$ is a constant) result in quadratic overall communication costs, becoming impractical for large values of $N$ (e.g., $N=100$). By contrast, our approach deliberately explores sub-polynomial scaling through logarithmic and constant functions, offering more efficient alternatives for large-scale deployments.\\n\\n\\n**Q2:**\\n> Do you have any theories why CommFormer performs so terribly in the Adversarial Pursuit 25 agent case? It performs similarly to ExpoComm in the Battle Environment 20 agent case; I would be interested to hear your theories on why it doesn\\u2019t generalize well or performs similarly to ExpoComm in that one case.\\n\\n**A2:** Thank you for this thoughtful question. The issue might relate to the asymmetric nature of the AdversarialPursuit tasks. Agents move slower than adversaries and face penalties for failed tagging attempts, which can lead them to become inactive (choose to do nothing rather than trying to tag adversaries). Our analysis shows that CommFormer's training loss and gradients quickly stabilize at low values, indicating early overfitting [3]. This is likely because CommFormer has a larger number of parameters (3-5 times more than other methods), making it more prone to overfitting in this scenario.\\n\\n**Q3:**\\n> The performance of ER and ExpoComm are very similar in the filled bar transferability cases (i.e. K=log2N). Do you have theories about why this is?\\n\\n**A3:** Thank you for raising this insightful point! We also observed this phenomenon and included a brief discussion in Section 4.2 (lines 476\\u2013479). 
The strong transferability of both ER and ExpoComm likely results from the global grounding of messages, a design choice we implemented for both our proposed ExpoComm and the ER baseline.\\n\\n-----\\n\\nReferences\\n\\n[1] Rundong Wang, Xu He, Runsheng Yu, Wei Qiu, Bo An, and Zinovi Rabinovich. Learning efficient multi-agent communication: An information bottleneck approach. In Proceedings of the 37th International Conference on Machine Learning, pp. 9908\\u20139918, 2020.\\n\\n[2] Shengchao Hu, Li Shen, Ya Zhang, and Dacheng Tao. Learning multi-agent communication from graph modeling perspective. In Proceedings of the 12th International Conference on Learning Representations, 2024.\\n\\n[3] Evgenii Nikishin, Junhyuk Oh, Georg Ostrovski, Clare Lyle, Razvan Pascanu, Will Dabney, and Andr\\u00e9 Barreto. Deep reinforcement learning with plasticity injection. In Advances in Neural Information Processing Systems, volume 36, 2024.\"}", "{\"title\": \"Summary\", \"comment\": [\"We sincerely thank all reviewers for their insightful comments and valuable feedback.\", \"In this work, we address the challenge of scalable communication in multi-agent reinforcement learning (MARL) and introduce ExpoComm, an exponential topology-enabled communication protocol. Our framework leverages communication topologies with small diameters for fast information dissemination and small graph sizes for reduced communication overhead. This design enables effective and scalable communication strategies that achieve superior performance and strong transferability, while maintaining (near-)linear communication costs relative to the number of agents.\", \"We are encouraged by the reviewers\\u2019 recognition of various aspects of our work. 
Specifically, we are pleased that our research question was considered **helpful for MARL research** (56ww, 6QM3), our method was recognized as **innovative** (9jpX, 6QM3) and **well-motivated** (9jpX), our experiments were regarded as **extensive** (MSLo, 9jpX, 6QM3), and our presentation was found **well-organized** (9jpX, 6QM3) and **easy to follow** (56ww).\", \"In response to the reviewers' comments and suggestions, we have provided detailed point-by-point responses and made the following key updates to the manuscript:\", \"**Explicit definition of communication costs** in Section 3.1.1 and Section 4.1 to enhance clarity and readability\", \"**Theoretical analysis** in Appendix A to support the small-diameter property of exponential topologies\", \"**Detailed descriptions of environmental settings** in Appendix B.2 to improve the clarity\", \"**Visualization results** in Appendix C.1 to illustrate the cooperation patterns induced by the proposed communication strategies\", \"**Experimental comparison with CommNet** in Appendix C.2 to demonstrate the superior performance and lower cost of ExpoComm compared to proxy-based communication methods\", \"**Discussion on limitations** in Appendix C.3 to highlight potential future research directions\", \"**During the rebuttal period, we believe we adequately addressed all questions and concerns raised by reviewers.** We are grateful that reviewers MSLo, 9jpX, and 6QM3 acknowledged the improvements made to the manuscript. We sincerely thank the reviewers, ACs, SACs, and PCs for their time and efforts in evaluating our work.\"]}", "{\"summary\": \"This paper proposes to utilize the exponential topology to enable rapid information dissemination among agents, which leads to the scalable communication protocol ExpoComm. Memory-based message processors are employed and an auxiliary loss is introduced to ground messages. 
Experiments are conducted to validate their algorithm.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Extensive experiments\", \"Superior performance against other baselines\"], \"weaknesses\": [\"The presentation of section 3.3 is not ideal, e.g. line 323, can you elaborate more on the prediction function f? line 354 to line 355, what is t'?\"], \"questions\": [\"In figure 4(f), why does the test win rate of ExpoComm start to drop from 4*1e6 step?\", \"This algorithm should excel in large-scale multi-agent environments, so can we suppose the performance gap between the algorithm proposed and other baselines should increase when the number of agents increases? If so, why can't we see such a trend in figure 4?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work focuses on the problem of learning scalable communication in many-agent systems through multi-agent reinforcement learning. To tackle this problem, this work uses exponential graphs to model the communication topology, and memory-based message processors and message grounding for information representation. Through examples and baseline comparisons, the authors demonstrate that exponential graphs can balance the trade-off between dissemination speed and redundancy, thereby allowing agents to receive messages from other connected members in the graph without additional communication overhead.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The problem this paper seeks to address is an interesting and important issue in multi-agent reinforcement learning regarding scalable communication. 
The idea of using exponential graphs in this application is novel to me, and I appreciated the way that the authors helped the reader build intuition for the benefits of an exponential graph design in section 3.1; I thought this was clear and very well-written. The experiments on the transferability of the method were very thorough, and I appreciated the comparisons for both K cases. The ablation studies served to reiterate the author's point and useful aspects of their architecture.\", \"weaknesses\": \"Overall, my concerns with this paper lie in the framing regarding communication overhead.\\n- The definition of communication overhead in this paper is rather ambiguous. It would probably be best to add a formalized definition, as communication overhead has very field-specific connotations beyond computer science. While addressed by the authors in the introduction, it is possible to reduce communication overhead by only communicating with a subset of peers. If ExpoComm does produce lower communication overhead, I think this paper could be strengthened significantly by adding additional metrics where the number of messages (or unique messages) ExpoComm has sent is directly compared against another method that only communicates with a subset of agents (eg DGN). From my understanding, Table 1 lacks these comparisons now. If ExpoComm does not result in lower numbers of messages passed between agents against comparable baselines (eg. MAGIC, DGN), then I would recommend clarifying the wording throughout the paper, to be in line with your definition of communication overhead.\", \"questions\": [\"Do you have any theories why CommFormer performs so terribly in the Adversarial Pursuit 25 agent case? 
It performs similarly to ExpoComm in the Battle Environment 20 agent case; I would be interested to hear your theories on why it doesn\u2019t generalize well or performs similarly to ExpoComm in that one case.\", \"The performance of ER and ExpoComm are very similar in the filled bar transferability cases (i.e. K=log2N). Do you have theories about why this is?\", \"Did you experiment with more K values beyond log2N and 1? If so, what did you observe and why did you believe these two values in the paper would be representative examples?\", \"Does communication overhead in this paper refer to number of messages passed? Can you provide a definition in-text? Are there additional communication savings that can be achieved by your method beyond its exponential graph structure? (i.e. savings due to pruning or shorter timesteps or less redundant agents communicated with compared to other techniques)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"(2/2)\", \"comment\": \"**Q4:** How does the proposed method performs compared to traditional mulitcast type of multi-agent communication method such as CommNet, as it seems more global communication is beneficial for the target scenarios?\\n\\n**A4:** Thank you for the question, and we would like to provide the following clarifications:\\n- Unlike ExpoComm, CommNet requires a physical communication proxy to aggregate messages from all agents during execution. This introduces additional hardware requirements for MASs, which is beyond the focus of our work. Without such a proxy, CommNet incurs $N$ (20 to 100) times more communication overhead compared to the $K=1$ case in our experiments. As a result, a direct comparison with CommNet is inherently unfair to ExpoComm and the other baseline methods.\\n- However, we recognize that this comparison can provide useful insights into the properties of different tasks. 
We have included these comparisons in Figure 11 and Table 6 in Appendix C.2. Despite incurring significantly lower communication costs, ExpoComm outperforms CommNet in most scenarios. Interestingly, CommNet shows comparable performance to ExpoComm in AdversarialPursuit tasks, suggesting that a global perspective is crucial in this scenario. This potentially explains the larger performance gap between ExpoComm and other baselines in this scenario. \\n- Unlike global communication strategies that rely on physical proxies (e.g., CommNet), ExpoComm achieves global communication in decentralized MASs through a carefully designed communication topology. This emphasizes the versatility and scalability of ExpoComm, as it can achieve effective communication without the need for centralized infrastructure.\\n\\n**Q5:** How does the proposed exponential graph compared with random selection of communication peers? (keeping the number of communication peers the same)?\\n\\n**A5:** This corresponds to the ER baseline in our manuscript, with results presented in Figure 4 and Table 1. Overall, ExpoComm outperforms the ER method in most scenarios. Please see Section 4.2 for a more detailed discussion.\\n\\n**Q6:** What are the limitation of this method? Based on the results in Figure 4, the benefit of the method is more obvious for AdversarialPursuit than for Battle. It seems it's only learning after for the Battle environments. Is it learning faster for Battle because there is less communication peers for each node?\\n\\n**A6:** Thank you for your questions. We share our thoughts regarding these questions here:\\n- Limitations: We have updated the manuscript with a subsection discussing limitations and future work (see Appendix C.3). We acknowledge that ExpoComm may not perform well in scenarios requiring more targeted communication, network MDPs, or non-cooperative tasks. 
We also suggest possible paths to further improve communication performance in many-agent systems.\\n- Performance in AdversarialPursuit: The larger performance gap may be due to the stronger need for global information in AdversarialPursuit. This is supported by the superior performance of CommNet (see A4) and visualization results in Figure 9. In these tasks, agents move slower than adversaries, requiring more coordinated behaviors and a global perspective to trap adversaries effectively. ExpoComm provides this global perspective, making it particularly well-suited for tasks that demand strong coordination.\\n- We are not entirely sure we fully understand the remaining part of the question. Could reviewer 56ww clarify what \\\"it's only learning after for the Battle environments\\\" means and what \\\"it\\\" refers to in the next sentence?\\n\\nWe hope these responses and additional experiments address your concerns and encourage you to consider a more favorable evaluation of our paper.\\n\\n----\\n\\nReferences\\n\\n[1] Kai Cui, Anam Tahir, Gizem Ekinci, Ahmed Elshamanhory, Yannick Eich, Mengguang Li, and Heinz Koeppl. A survey on large-population systems and scalable multi-agent reinforcement learning. arXiv preprint arXiv:2209.03859, 2022.\\n\\n[2] Lukas M Schmidt, Johanna Brosig, Axel Plinge, Bjoern M Eskofier, and Christopher Mutschler. An introduction to multi-agent reinforcement learning and review of its application to autonomous mobility. In IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), pp. 1342\\u20131349. IEEE, 2022.\\n\\n[3] Chengdong Ma, Aming Li, Yali Du, Hao Dong, and Yaodong Yang. Efficient and scalable reinforcement learning for large-scale network control. Nature Machine Intelligence, pp. 1\\u201315, 2024.\"}", "{\"comment\": \"We are delighted to hear that you now find everything clear, and we sincerely appreciate your support. 
Once again, thank you for your time and effort in helping us improve our work.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you to the authors for their responses and updates on the paper. I have no further questions. I remain inclined to recommend acceptance of this paper.\"}", "{\"summary\": \"The paper introduces ExpoComm, a scalable communication protocol for multi-agent reinforcement learning that uses exponential graph topologies to efficiently manage information flow among numerous agents in large-scale environments. Exponential graphs offer a small-diameter structure enabling ExpoComm to have rapid and cost-effective communication across agents. The method overcomes the need to finding task specific pairwise communication. The authors utilize a memory based message network for processing messages over time and to allow agents to accumulate and utilize past information. In addition, auxiliary tasks are used to align messages with global information, either through direct access to the global state (when available) or through contrastive learning techniques. The method is evaluated on MAgent gridworld benchmarks and compared with baselines having varying communication protocols. The one-peer verison of the exponential graph performed the best despite only requiring a linear scaling communication cost.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The use of exponential graphs as a communication topology in MARL is an innovative approach. Combining memory-based message processing and auxiliary tasks to enhance message relevance is also a strong contribution. The authors have performed extensive experiments with multiple baselines. The zero-shot transferability demonstrates a level of generalization. The method\\u2019s scalability and efficiency in managing communication in large agent populations without sacrificing performance have implications for large-scale MARL applications. 
Overall, the manuscript is well-organized and clearly presents both the motivation and implementation details of the proposed approach.\", \"weaknesses\": \"The paper would benefit from a more explicit discussion of the limitations of ExpoComm. Specifically, scenarios where the proposed exponential topology may not be ideal---such as tasks requiring task-specific pairwise communication links or settings where agents are non-cooperative or adversarial---are not fully addressed.\", \"questions\": \"The proposed exponential topology works well for cooperative tasks. However, could the authors clarify how this approach might be adapted for tasks where agents require specific pairwise connectivity? For instance, if a task necessitates more targeted information sharing between certain agents due to task-specific roles, would ExpoComm accommodate such requirements?\\n\\nHow would the proposed protocol perform in scenarios where some agents are adversarial or non-cooperative?\\n\\nCould the authors provide more details about the experimental setups? Including environment visualizations or schematic diagrams would be extremely helpful for understanding the experimental conditions. Such visual aids could illustrate the setup, agent interactions, and communication patterns in more detail.\\n\\nIn Figures 4 and 6, the x-axis is labeled as \\\"test return,\\\" although it appears to show plots related to training return. Could the authors clarify this discrepancy?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"As the rebuttal period is coming to an end, we would like to thank you again for your valuable feedback. 
In our rebuttal, we have:\", \"Extended the theoretical analysis in Appendix A.\", \"Added a comparison with the proxy-based method CommNet in Figure 11 and Table 6 (Appendix C.2).\", \"Included a discussion on the limitations of our approach in Appendix C.3.\", \"Addressed and clarified the other questions raised in your review.\", \"We hope that our responses, along with the improvements in the revised manuscript, have sufficiently addressed your concerns. If this is the case, we would greatly appreciate it if you could consider updating your review score. If there are any remaining questions or concerns, please do not hesitate to let us know. Thank you again for your time and insights.\"]}", "{\"summary\": \"The work focuses on improving communication in large scale multi-agent reinforcement learning. The authors propose using exponential topology as a communication pattern among agents. The authors show that this method can improve the multi-agent performance in a large scale environment using the MAgent and Infrastructure Management Planning environments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The author studies the problem of improving large scale MARL communication, which can be helpful for MARL research and deployment.\", \"The paper is well written and easy to follow. The toy example and figures presented are helpful for understanding.\", \"The proposed method seems to show improved performance for the AdversarialPursuit case.\"], \"weaknesses\": [\"It is unclear how the proposed exponential graph can be helpful for improving agent communication under the target scenario. Providing some theoretical analysis or a motivating example would be helpful. The toy example is helpful, but random global communication is not considered there.\", \"Besides the exponential graph, the contribution of this work seems limited. 
Highlighting and clarifying the contributions of this work would be helpful for better understanding.\", \"Evaluation seems incomplete. Comparing the proposed method to the traditional broadcast communication methods can help make the claim more persuasive.\"], \"questions\": [\"How does the proposed method performs compared to traditional mulitcast type of multi-agent communication method such as CommNet, as it seems more global communication is beneficial for the target scenarios?\", \"How does the proposed exponential graph compared with random selection of communication peers? (keeping the number of communication peers the same)?\", \"What are the limitation of this method? Based on the results in Figure 4, the benefit of the method is more obvious for AdversarialPursuit than for Battle. It seems it's only learning after for the Battle environments. Is it learning faster for Battle because there is less communication peers for each node?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
CKx7eOYFG8
Exemplar-free Continual Representation Learning with Symmetric Distillation
[ "Thomas Paul Alexandre Wiggers", "Tejaswi Kasarla", "Melika Ayoughi", "Paul Groth", "Pascal Mettes" ]
Continual learning strives to train a model in a sequential manner by learning from new tasks while retaining information about old tasks. Treating this as a common classification problem leads to catastrophic forgetting, especially in deep learning settings, where knowledge of old tasks is forgotten as soon as a model is optimized on new tasks. Existing solutions tackle this problem by imposing strict assumptions, such as the availability of exemplars from previously seen classes or a warm start of a model on many classes before starting the continual learning. While effective on known benchmarks, such assumptions can be impractical and do not directly address the stability-plasticity dilemma in continual learning. In this paper, we follow a recent push in the field to tackle continual learning in the exemplar-free cold-start setting. We propose Model-in-the-Middle (MITM). The idea behind MITM is to separate the learning of new classes and retention of past class knowledge by using two distinct models. We propose a learner with symmetric distillation from both models, enabling us to learn evolving representations as new tasks arrive. We show that explicitly separating and balancing old and new tasks through symmetric distillation helps absorb large distribution shifts in between tasks, mitigating the stability gap. Our approach is simple yet outperforms the state-of-the-art in the challenging exemplar-free cold-start continual learning setting.
[ "continual learning", "class-incremental learning" ]
https://openreview.net/pdf?id=CKx7eOYFG8
https://openreview.net/forum?id=CKx7eOYFG8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yvikBTUia4", "vL1nubdMTF", "jfccDhMxEB", "Xct0T4FJ70", "EmjjyjWCUM", "58eKVmt1Tg" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1729080072059, 1731506710462, 1730636180737, 1730189062103, 1731506254139, 1730629330903 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6544/Reviewer_GTZZ" ], [ "ICLR.cc/2025/Conference/Submission6544/Authors" ], [ "ICLR.cc/2025/Conference/Submission6544/Reviewer_mrJX" ], [ "ICLR.cc/2025/Conference/Submission6544/Reviewer_R7Zj" ], [ "ICLR.cc/2025/Conference/Submission6544/Authors" ], [ "ICLR.cc/2025/Conference/Submission6544/Reviewer_r4ct" ] ], "structured_content_str": [ "{\"summary\": \"The authors address the problem of exemplar-free class-incremental learning in the challenging cold-start setting, where classes are equally distributed among tasks, and no initial training is performed on a large first task. This contrasts with the less challenging warm-start setting, where the backbone exploits a well-trained representation before incremental learning.\\n\\nThe authors propose the Model-in-the-Middle (MITM) approach, which aims to separate the learning of new classes from the retention of past class knowledge. Specifically, for each task, two models are trained (leading and middle models), and a frozen model from the previous task (trailing model) is used for distillation. The leading model is trained with a cross-entropy loss applied only to the logits of the current task's classes, without any regularization term, and aims to learn the feature representation for the current task. 
The middle model is trained with a symmetric distillation loss, which consists of two logit distillation losses applying a mean squared error to match target logits: the first logit distillation term matches the logits of the middle model on the current task's classes with the logits of the leading model for the same classes; the second logit distillation term matches the logits of the middle model on the previous task's classes with the trailing model, which is a frozen copy of the middle model before training.\\n\\nAt the end of training, the symmetric distillation loss balances the middle model's representations, enabling classification across all tasks. The authors evaluate the proposed approach using multiple metrics. Beyond standard metrics in continual learning (CL)\\u2014such as final average accuracy, average incremental accuracy, forgetting, and minimum accuracy\\u2014the authors introduce a new metric called Final Accuracy Standard Deviation to provide insights into task-recency bias. The proposed method demonstrates state-of-the-art performance on S-CIFAR-100 (10-step, 20-step) and S-Tiny-ImageNet (10-step) across the four metrics, without relying on any prototype rehearsal approach, which is a common strategy in the literature.\\n\\nRegarding their analysis, the authors show that employing separate softmax layers for current and previous task classes mitigates the stability gap in the offline CL setup, confirming the results found by Caccia et al. (2022) for online CL. They also ablate the proposed approach against logit distillation with separated softmax layers. Finally, they discuss the stability-plasticity trade-off, highlighting that the method is robust with respect to hyperparameter selection in the leading model's cross-entropy loss.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written. 
The authors effectively contextualize the problem they aim to solve, starting with the introduction and continuing through the related work section. The methodology section clearly outlines the challenges and how they are addressed. I appreciate Figure 1, which, in its simplicity, helps the reader follow the methodology easily. The experimental section is also clearly presented.\", \"The proposed method, Model-in-the-Middle (MITM), introduces a simple and effective mechanism to reduce task recency bias and improve average accuracy by employing the proposed symmetric logit distillation loss on the middle model and cross entropy loss on the leading one, as demonstrated in Table 4 of the ablation study.\", \"Compared to previous exemplar-free approaches, the authors' original contribution lies in training the leading model to learn the current task classes, while the middle model balances the representation without requiring any prototypes. This approach avoids assumptions about the feature distribution that are typically necessary for prototype rehearsal, as well as the need for strategies to compensate for drift.\", \"I find the proposed metric for task-recency bias (Final Accuracy Standard Deviation) particularly interesting, as it enables a better analysis of the impact of task-recency bias on final performance, alongside the analysis of stability gap in the offline incremental setting.\"], \"weaknesses\": \"**Methodology Weaknesses**\\n\\nOverall, I believe that the methodology requires additional theoretical or empirical investigation to further strengthen the novel contributions of the paper. Below, I summarize my major concerns:\\n\\n- While the use of separate softmax layers and logit distillation on past logits adds value to the approach, these techniques are not entirely novel. Similar strategies have already been explored in offline exemplar-based methods such as X-DER [b], as well as in online incremental learning by Caccia et al. [a]. 
Additionally, the concept of distillation using both past and current logits is well-established in X-DER, though in the context of methods with exemplars rather than exemplar-free settings. The current approach demonstrates that logit distillation improves performance even in exemplar-free scenarios, but it remains unclear why it is effective in this context. A more thorough empirical investigation into why logit distillation works under such conditions would greatly enhance the contribution of the paper, which otherwise remains limited to the introduction of the MITM model.\\n\\n- The authors mention that they allow the trailing model to update the batch normalization statistics to account for distribution shift across tasks, but no empirical evidence of the resulting performance improvement is provided. Are the batch normalization statistics also updated for the middle and leading models?\\n\\n**Experiments Weaknesses**\\n\\nOverall, I believe the experimental section is weak, and there are several ambiguities that make it difficult to fully understand the performance of the proposed method. Additionally, the authors do not address the limitations of their approach. Below, I summarize my major concerns:\\n\\n- The authors mention that they train a slimmed ResNet-18 model for their experiments (Lopez-Paz & Ranzato, 2017 [c]). This model has three times fewer feature maps across all layers compared to the full ResNet-18, resulting in approximately 1 million parameters and a feature space size equal to 160 (compared to about 11 million parameters and a feature space size equal to 512 in the full version). To the best of my knowledge, all competitors (FeTriL [d], EFC [e], LwF+LDC [f]) use the full ResNet-18. Since the results for these competitors in the tables are excerpted from EFC or LwF+LDC, the authors should align all performance results using the full ResNet-18 under equal conditions to accurately assess how much better the proposed approach is. 
This applies to both the stability gap plot and the per-task accuracy as well.\\n\\n- In Figure 2 (top left), it is unclear why the accuracy on Task 1 for EFC before the training of Task 2 is significantly lower compared to LwF-SS and MITM, even though incremental learning has not yet been performed. This suggests that the starting points of the three approaches may differ, which could be related to the point I previously mentioned. Accuracy plots across the incremental learning steps would be necessary to clarify this. Additionally, the accuracy values in the stability plot are inconsistent when compared to Figure 3. \\n\\n- The experimental section is limited. The authors only evaluate their approach on S-CIFAR100 (10 and 20 steps) and Tiny-ImageNet (10 steps), without conducting any evaluation on the ImageNet-Subset (a subset of ImageNet-1K, with the same resolution but only 100 classes). Additionally, a complete evaluation with 20 steps would be necessary (on both Tiny-Imagenet and Imagenet-Subset). These benchmarks are standard and widely used [d][e][f][g], making them essential for understanding robustness of the method and comparisons in future works.\\n\\n- The authors do not discuss the limitations of their approach. For instance, although the introduction of the middle model is interesting, it also introduces a computational training burden. While prototype-based approaches may be less effective, they do not significantly increase training resource requirements, as they only require training a single backbone per task. In contrast, the MITM approach requires training two backbones per task. How much does the computational burden increase compared to other approaches? Moreover, how does it compare to joint training on a single model? \\n\\n\\n[a] Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, and Eugene Belilovsky. New insights on reducing abrupt representation change in online continual learning. 
In ICLR, 2022\\n\\n[b] Matteo Boschini, Lorenzo Bonicelli, Pietro Buzzega, Angelo Porrello, and Simone Calderara. Class-incremental continual learning into the extended der-verse. TPAMI, 2022.\\n\\n[c] David Lopez-Paz and Marc\\u2019Aurelio Ranzato. Gradient episodic memory for continual learning. NeurIPS, 2017\\n\\n[d] Gregoire Petit, Adrian Popescu, Hugo Schindler, David Picard, and Bertrand Delezoide. FeTrIL: Feature translation for exemplar-free class-incremental learning. In WACV, 2023\\n\\n[e] Simone Magistri, Tomaso Trinci, Albin Soutif, Joost van de Weijer, and Andrew D Bagdanov. Elastic feature consolidation for cold start exemplar-free incremental learning. In ICLR, 2024\\n\\n[f] Alex Gomez-Villa, Dipam Goswami, Kai Wang, Andrew D Bagdanov, Bartlomiej Twardowski, and Joost van de Weijer. Exemplar-free continual representation learning via learnable drift compensation. In ECCV 2024\\n\\n[g] Fei Zhu, Xu-Yao Zhang, Chuang Wang, Fei Yin, and Cheng-Lin Liu. Prototype augmentation and self-supervision for incremental learning. In CVPR, 2021\", \"questions\": \"From my perspective, the paper is well-written, and the methodology offers some novel contributions, making the proposed method appealing.\\n\\nRegarding the methodology's weaknesses, while the introduction of the MITM model for class incremental learning is novel, further empirical analysis explaining why symmetric logit distillation acts as an effective regularizer to mitigate forgetting and reduce inter-task confusion [1] in the exemplar-free setting could enhance its novelty. In exemplar-based settings (e.g., X-DER) or exemplar-free settings with prototypes (e.g., LwF+LDC), the final classifier is trained to distinguish classes across tasks using exemplars or prototypes, making the mitigation of inter-task confusion in these scenarios more intuitive. 
Here, however, it is less clear how the proposed approach mitigates inter-task confusion, as the classifier is never explicitly trained to distinguish samples from different classes. Regarding representation forgetting [2], an analysis using linear probing could help to assess the separability of features, particularly in comparison with LwF+LDC, thereby providing insights into representation forgetting.\\n\\nFinally, a clearer discussion of batch normalization and its impact on the improvement is required.\\n\\nAs for the experimental weaknesses, I have major concerns. The experimental section needs clarification, as I mentioned earlier. It is unclear whether all comparisons employ the same architecture for a fair evaluation. Furthermore, only a few experiments on multiple benchmarks are performed. While I am not asking for a large-scale evaluation on ImageNet-1K, a limited set of benchmarks with reasonable computational requirements is necessary for understanding robustness of the method and future comparisons. Finally, the authors should discuss the limitations of their approach, as providing a thorough overview is equally important.\\n\\nOverall, given the above considerations, I rate the paper as marginally below the acceptance threshold, leaning towards rejection. In its current state, I do not consider it ready for acceptance. I am open to adjusting my score, either increasing or decreasing it, based on the rebuttal phase. \\n\\nAdditional Questions (not relevant to the score): Is the leading model trained from scratch for each task, or is it a copy of the trailing model prior to the current task training? This detail seems to be missing from the paper and could be worth including.\\n\\n[1] Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. 
Bagdanov, Joost van de Weijer, Class-incremental learning: survey and performance evaluation (TPAMI 2022)\\n\\n[2] MohammadReza Davari, Nader Asadi, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky, Probing Representation Forgetting in Supervised and Unsupervised Continual Learning (CVPR 2022)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We want to thank the reviewers for their time and valuable feedback on the paper. We appreciate the positive comments on the writing, results, and clarity of the method. While we believe that we can address all comments, this effort would extend beyond the timeframe of the rebuttal. We therefore will withdraw the paper and we will focus on improving the paper based on the comments from the reviewers for a future re-submission. Thank you again for your input, it will help us make a better version of the paper.\"}", "{\"summary\": \"This work presents a new method for exemplar-free class-incremental learning (EFCIL) called Model-in-the-Middle (MITM). The study focuses on the cold-start scenario, where the trade-off between plasticity and stability is more pronounced than in typical incremental learning settings. MITM proposes using symmetric distillation across three models: a leading (plastic) model, a trailing (stable) model, and a middle (student/learner) model. The authors demonstrate empirical results on equally split CIFAR-100 and Tiny-ImageNet datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. Tackling one of the hardest problems of CL - EFCIL in a cold-start scenario.\\n2. The paper is easy to follow, with a good explanation of the used evaluation metrics.\", \"weaknesses\": \"1. 
Such a method of double distillation from a fully plastic model and a second stable model is not new; it was presented in previous works, i.e., [1], [2]. [1] presents far more results, with evaluation on CIFAR-100 and TinyImageNet as well. But this work does not compare to it (even to LwF+ANCL).\\n\\nThe second work [2] extends it a bit further, proposing that the \\\"copy\\\" operation can be exchanged for distillation and that the stable model can be different from the plastic one (heterogeneous architectures), although there for unsupervised continual representation learning. \\n\\n2. The choice of FeTrIL for the cold-start scenario in EFCIL is questionable. Providing one result for DMC in Tab. 1 for comparison would help as well. \\n\\n3. Some unclear statements in the text, e.g. line 320-321: _our method optimizes all three models jointly on the current task data._ I believe that is not true, because the trailing model is frozen - see Fig. 1 (BN statistics are only updated to update the teacher). This can be confusing for the reader. \\n\\n4. Related work: positioning methods like LwF and EWC as warm-start methods. More focus on warm-start than cold-start related work, and on the selection of methods.\\n\\n5. Using only two small-size datasets and a single model architecture.\\n\\nOverall, I find this work to be very simple research based on an already existing idea, with a limited experimentation section and a non-insightful analysis of the results (ok, the bias is presented nicely), and a strange selection of methods for comparison. I think that this work needs a bit of improvement and more experiments to bring something new and become a good publication.\\n\\n[1] Kim, Sanghwan, et al. \\\"Achieving a better stability-plasticity trade-off via auxiliary networks in continual learning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n[2] Gomez-Villa, Alex, et al. 
\\\"Plasticity-optimized complementary networks for unsupervised continual learning.\\\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024.\", \"questions\": \"1. Why work do not present the results on the bigger datasets, e.g., ImageNet-100?\\n2. Why not using lambda=0.8 for the all results if that presents the better outcome for FAA, AIA, and Forgetting than 1.0? \\n3. Why in the abstract not state that is class-incremental learning?\\n4. Can you elaborate/support this claim? (l.61-62:) _Moreover, exemplar-based solutions do not generalize to vision-language domains, as keeping exemplars of all possible textual descriptions is not tractable_\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an exemplar-free continual representation learning method with Symmetric Distillation to address catastrophic forgetting in continual learning. The method introduces a Model-in-the-Middle (MITM) architecture that divides the network into leading, middle, and trailing models. The leading model focuses on learning new tasks, the trailing model preserves knowledge from previous tasks, and the middle model distills knowledge from both. This design aims to balance stability and plasticity effectively, reducing task-recency bias. 
Experimental results show that this approach outperforms existing methods on several benchmarks in exemplar-free, cold-start continual learning settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.Divide the network to be optimized into leading, middle, and trailing versions, allowing the network to learn new knowledge while preserving previous knowledge, thus effectively balancing stability and plasticity\\n2.A new metric Final Accuracy standard deviation (FA\\u03c3) is introduced to measure the degree of drift in the model's representation ability for old tasks during continual learning.\\n3.The paper presents extensive experiments on multiple benchmark datasets, demonstrating the method's superiority, especially under the stringent exemplar-free, cold-start setting.\", \"weaknesses\": \"1.Related work section is overly broad and insufficiently detailed, and it does not provide a brief overview of the state-of-the-art methods used for comparison later, such as EFC [1].\\n2.It remains to be clarified what challenges are encountered by cutting-edge continual learning research in the exemplar-free, cold-start setting, and which directions current research is focused on.\\n3.The MITM method employs symmetric distillation; however, the loss function is calculated using logits without aligning feature extraction with the trailing model, which limits the interpretability of this approach. Moreover, as the number of tasks increases, this may lead to a drift in the model\\u2019s feature representation capability, potentially explaining the performance decline in the exemplar-based setting.\\n4.There is a lack of discussion on parameter count and computational overhead. The MITM architecture requires maintaining three models simultaneously, which incurs additional computational and memory burden. 
This paper does not clarify the method\\u2019s performance in terms of computational efficiency and memory usage.\\n\\n[1] Simone Magistri, Tomaso Trinci, Albin Soutif, Joost van de Weijer, and Andrew D Bagdanov. Elastic feature consolidation for cold start exemplar-free incremental learning. In ICLR, 2024.\", \"questions\": [\"In Equation 8, only one hyperparameter \\u03bb is introduced. Would it be possible to introduce another hyperparameter for L_trailing?\", \"Table 6 shows the performance of MITM in the exemplar-based cold-start setting, with an unexpected decline across various metrics. This result seems somewhat counterintuitive, and further experiments are needed to explain the underlying reasons.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a continual learning method which does not depend on prototypes, exemplars, or a pre-training stage. The authors propose to separate the representations of old and new tasks using different models and then distill their knowledge into a middle model to ensure a better stability-plasticity tradeoff. The proposed method outperforms existing methods across 2 datasets in cold-start CL settings with a reduced task-recency bias.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper is well-written, organized, and easy to understand.\\n2. The paper proposes a very simple training strategy of symmetric distillation with 3 models.\\n3. The Final Accuracy standard deviation metric is quite helpful for evaluation.\\n4. The discussion of the stability gap is appreciated.\", \"weaknesses\": \"1. References and discussion in comparison to similar CL methods using auxiliary networks [1] or learning with multiple models [2] are missing.\\n\\n2. Poor experimental section: It is standard practice to evaluate the method on a large-resolution dataset like ImageNet-100, as done in all recent works like [3,4], while this paper provides experiments only on small-resolution datasets like CIFAR100 and TinyImageNet. The recently proposed Adversarial Drift Compensation [3] also uses the challenging cold-start setting and should be included in the comparison.\\n\\n3. It is not clear how the proposed method works in warm-start settings. While it\\u2019s good to evaluate in challenging cold-start settings, does the method still work in warm-start settings? \\n\\n4. While warm-start evaluation can be a bit biased due to the large number of base classes, a more realistic setting is using pre-trained ViT models following recent works [5]. It would be interesting to see how MITM works with pre-trained models.\\n\\n[1] Kim, Sanghwan, et al. \\\"Achieving a better stability-plasticity trade-off via auxiliary networks in continual learning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[2] Arani, Elahe, Fahad Sarfraz, and Bahram Zonooz. \\\"Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System.\\\" International Conference on Learning Representations, 2022.\\n\\n[3] Goswami, Dipam, et al. \\\"Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 
2024.\\n\\n[4] Magistri, Simone, et al. \\\"Elastic Feature Consolidation For Cold Start Exemplar-Free Incremental Learning.\\\" The Twelfth International Conference on Learning Representations.\\n\\n[5] G. Zhang, L. Wang, G. Kang, L. Chen, and Y. Wei. Slca: Slow learner with classifier alignment for continual learning on a pre-trained model. In ICCV, 2023.\", \"questions\": \"It is not clear if all the compared methods used the same network \\u201cslimmed ResNet18\\u201d. This should be clarified.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
CKqiQosLKc
Sampling from Energy-based Policies using Diffusion
[ "Vineet Jain", "Tara Akhound-Sadegh", "Siamak Ravanbakhsh" ]
Energy-based policies offer a flexible framework for modeling complex, multimodal behaviors in reinforcement learning (RL). In maximum entropy RL, the optimal policy is a Boltzmann distribution derived from the soft Q-function, but direct sampling from this distribution in continuous action spaces is computationally intractable. As a result, existing methods typically use simpler parametric distributions, like Gaussians, for policy representation — limiting their ability to capture the full complexity of multimodal action distributions. In this paper, we introduce a diffusion-based approach for sampling from energy-based policies, where the negative Q-function defines the energy function. Based on this approach, we propose an actor-critic method called Diffusion Q-Sampling (DQS) that enables more expressive policy representations, allowing stable learning in diverse environments. We show that our approach enhances exploration and captures multimodal behavior in continuous control tasks, addressing key limitations of existing methods.
[ "Reinforcement learning", "Diffusion models" ]
https://openreview.net/pdf?id=CKqiQosLKc
https://openreview.net/forum?id=CKqiQosLKc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wycB76LUw2", "dnf53H0bJY", "dNDQIsNxWK", "ctCqx6Z1hy", "aKhSuBX8Ir" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731523755377, 1729767255615, 1729782580023, 1730889021868, 1730527187752 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14109/Authors" ], [ "ICLR.cc/2025/Conference/Submission14109/Reviewer_hBVq" ], [ "ICLR.cc/2025/Conference/Submission14109/Reviewer_fgmh" ], [ "ICLR.cc/2025/Conference/Submission14109/Reviewer_zuam" ], [ "ICLR.cc/2025/Conference/Submission14109/Reviewer_auFv" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their feedback, particularly reviewer hBVq for the detailed comments. We will incorporate the suggestions and revise our work.\"}", "{\"summary\": \"The authors introduce a new algorithm for continuous RL environments, Diffusion Q-Sampling (DQS).\\nDQS makes use of an existing method, iterated Denoising Energy matching (iDEM). \\nThe key idea is to use iDEM to learn a score function which can be used in a reverse diffusion process to sample actions. \\nThe score function is trained such that the reverse diffusion process approximately samples from a Boltzmann distribution with respect to the Q-function of the current policy. \\n\\nThe authors give two theoretical results, corresponding to policy improvement and policy iteration respectively, to justify their choice of training rule for the action-value function and the diffusion model. \\n\\nThe authors then give experimental results for their method, DQS. \\nIn the first set of results, they compare DQS to SAC (soft actor-critic) and QSM (Q-score matching) in terms of the diversity of behaviors learned. 
They demonstrate that, in a goal-reaching maze environment, DQS can successfully learn a diverse set of solutions, while SAC and QSM learn a more concentrated set of solutions. \\n\\nIn the second set of results, they compare DQS to SAC and QSM on 8 tasks from the DeepMind control suite. They demonstrate that on many of these tasks, DQS dominates the other methods.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Originality - The application of iDEM is (to this reviewer's knowledge) novel; although other methods seek to use diffusion model policies, they typically use other methods for fitting the diffusion model.\\n\\nQuality - The empirical results given are strong. The first set of results demonstrates well that DQS can indeed learn a policy which has support on multiple different solution types for problems. The second set of results shows that DQS can learn well, and outperform baseline methods in terms of sample efficiency. \\n\\nClarity - In general, the authors' writing is clear. The method is well-explained, and seems reproducible. \\n\\nSignificance - The authors propose an effective new algorithm for continuous control. This algorithm seems particularly useful for the setting where compute is not a bottleneck, and multimodal policies are explicitly desired.\", \"weaknesses\": \"046 - The authors give methods of policy representation in the continuous setting. I would suggest that they mention SQL, which allows for the training of expressive policies which come from neither noise injection nor a parametric family. 
These are trained via Stein-variational gradient descent.\\n\\n071 - The claim is made that \\\"[Diffusion models] have been extensively applied to solve sequential decision-making tasks, especially in offline settings where they can model multimodal datasets from suboptimal policies or diverse human demonstrations.\\\" No citations are given for these techniques - please include citations to the literature to which you are referring. \\n\\n191 - I would encourage the authors to say more about the role of the reverse SDE (3) in generation. Specifically, please be clear about how (3) is used to generate samples, rather than assuming this knowledge on the part of the reader. \\n\\n205 - Missing tildes over the x's in the expectations in Eq. (4). \\n\\n210 - Subscript below the S in equation (5) should be a capital K. \\n\\n260 - Lemma 1 is false, and its proof is invalid. Lemma 1 states that, for any action-value function, the policy which is Boltzmann with respect to that action-value function has a dominating action-value function. This statement is incorrect, and obviously so. Let $\\\\pi^*$ be the optimal policy, with action-value function $Q^*$. Then we know that $Q^*$ satisfies the Bellman optimality operator, $T^* Q(s,a) = r + \\\\gamma \\\\mathbb{E}[ \\\\max_{a'} Q(s',a') | s, a]$, where the expectation is taken over next-states $s'$ conditional on state-action pair $s,a$. If Lemma 1 were true, it would mean that the Boltzmann policy $\\\\pi_B$ with respect to $Q^*$ has an action-value function which dominates $Q^*$. But note that $T^* Q^*(s,a) \\\\geq T^{\\\\pi_B} Q^*(s,a)$, which can be seen by expanding definitions and using the fact that the maximum over $a'$ dominates any expectation with respect to $a'$, except if that expectation only places mass on the argmax actions. From this it also follows that this inequality is strict somewhere provided $Q^*$ is non-uniform somewhere. But since $T^* Q^* = Q^*$, it follows that $Q^* \\\\geq T^{\\\\pi_B} Q^*$. 
By monotonicity of the Bellman operator, it follows that, for all $n$, $Q^* \\geq [T^{\\pi_B}]^n Q^*$. Taking limits as $n \\to \\infty$, we obtain that $Q^* \\geq Q^{\\pi_B}$, with strict inequality somewhere provided $Q^*$ is not flat. This contradicts the stated result. \\n\\nWe now turn to the proof given in A.1, and examine the error of reasoning. In the first two lines of (10), the expectation of $\\log(\\pi_{new})$ is taken with respect to $\\pi_{new}$, and the expectation of $\\log(\\pi_{old})$ is taken with respect to $\\pi_{old}$. However, in the third line of (10), the expectation of both terms is taken with respect to $\\pi_{new}$. This allows the authors to express this term as a KL-divergence, a step critical to their proof. However, the term should instead be a difference of entropies, which in general is not non-negative (as the KL-divergence is). \\n\\n265 - The proof of Theorem 1 is invalid. The proof relies heavily on the same argument as in Lemma 1, which is faulty. \\n\\nIn general, it seems like the authors fail to appreciate that results from the entropy regularised setting and the classical setting cannot be freely interchanged. The optimal policy is Boltzmann only if an entropy regularisation term is included in the Bellman backup, (7). When there is no such entropy term in the backup, the optimal policy will simply be the classical optimal policy, which in general is deterministic (or has support only on argmax actions). Similarly, the Boltzmann improvement map only gives improvement with entropy regularisation. Otherwise it can result in a strictly worse policy, as explained above. 
\\n\\nI would suggest that the authors either cut their theoretical results entirely, or think about replacing the Bellman backup in (7) with the entropy regularized backup - however this would result in a substantial change to the algorithm, which may be too late at this stage.\", \"questions\": \"074, 122 - The claim is made twice that for Q-Score Matching (Psenka et al. 2023), \\\"the exact form of the policy is unspecified and it is unknown what distribution the diffusion models sample from\\\". But this is equally true for your method. Both DQS and QSM train score functions which are used to generate samples of the policy. And both DQS and QSM aim for these score functions to allow for sampling from the Boltzmann distribution with respect to the current action-value function. So it is unclear what this comment is meant to mean, or what advantage you are supposing DQS has over QSM. Can the authors please clarify this?\\n\\n461 - You mention that diffusion based policies have an increased runtime compute requirement compared to parametric policy methods. Can you give an indication of the ratio of runtime for your method vs. SAC? Are there experiments you can run which demonstrate that DQS outperforms SAC when normalised for compute time? \\n\\nWill a codebase be made available to accompany the paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a diffusion-based sampling method that uses a negative Q-function as an energy function for sampling, thus allowing for more expressive policy representations. Based on this approach, an actor-critic method called **Diffusion Q-Sampling (DQS)** is proposed that enables stable learning in diverse environments. Experiments show that the method enhances exploration in continuous control tasks and effectively captures multimodal behaviours, overcoming key limitations of existing methods. 
However, the core sampling method used in this paper is iDEM, leading to a lack of innovation; the experimental results are insufficient, the multimodal experiments may be problematic (the results of the Q-score method), and the baseline algorithms are too few and too simple.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This article proposes sampling with a diffusion policy obeying a Boltzmann distribution to balance exploration and exploitation, focusing on a very cutting-edge area;\", \"This paper does a multimodal experiment to show that DQS has some multimodality, a point that may be of interest to the RL community;\", \"The writing of the paper is easy to follow.\"], \"weaknesses\": [\"The related work is not presented carefully enough; some works are only cited. In particular, the related work under online diffusion is particularly scarce, and the authors need to summarise each approach and where its flaws lie. In addition, **diffusion & online RL** related work also needs expansion; I found a recent paper accepted at NeurIPS 2024 that is also under this setting: Diffusion Actor-Critic with Entropy Regulator (Wang et al.).\", \"You mention that the Q-score method does not have an exact distribution, but isn't Eq. (21) of the original paper a Boltzmann distribution? Is the representation in your paper not quite correct? It's better to clarify your statement about the Q-score method and explain how it relates to Eq. (21) in the original paper.\", \"The two proofs in 4.1 about policy improvement and policy iteration do not depend on the diffusion model; this is essentially a mathematical proof about a policy obeying a Boltzmann distribution. May I ask what is the essential difference between your proofs and the ones in the Soft Actor-Critic Algorithms and Applications (Haarnoja et al.) paper?\", \"With the experiments in 5.1, I remain sceptical about the results of QSM. 
I think that with the addition of some tricks to fully learn the gradient of Q with respect to a, QSM could get the same results as you did (e.g., do some random sampling to update the gradient of Q with respect to a, to get it to learn over the full action space).\", \"5.2 There are too few baselines for the experimental comparisons. To prove your excellent performance, add Proximal Policy Optimisation Algorithms (Schulman et al.), Diffusion Actor-Critic with Entropy Regulator (Wang et al.), and Policy Representation via Diffusion Probability Model for Reinforcement Learning (Yang et al.). At least test a few difficult MuJoCo environments (Humanoid, Ant) and compare with the above algorithms.\", \"### Minor notes.\", \"Please number all formulas in the main text. Also, looking at the expression for the Q function, where did the discount factor go?\", \"Equation (4) has incorrect parentheses.\"], \"questions\": \"See suggestions and questions in the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors have developed a new actor-critic algorithm called Diffusion Q-Sampling (DQS), which uses a diffusion-based model to sample from energy-based policies in an actor-critic framework. The goal is to address the current limitation in capturing the complexity of multimodal action distributions in continuous action spaces. This novel algorithm is shown to be very effective for learning multimodal behaviors and for improving sample efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
By explicitly sampling from the Boltzmann distribution of the Q function, DQS shows a better ability to balance exploration and exploitation.\\n3. Through experiments on maze tasks and DeepMind Control Suite benchmarks, the results confirm the advantages of DQS.\", \"weaknesses\": \"1. As pointed out by the authors, the temperature of DQS needs to be manually tuned, unlike SAC, as it would be computationally very expensive to compute the likelihoods under the diffusion model.\\n2. No ablation study. It may be beneficial to have some ablation studies, for example, how sensitive DQS is to different temperature values or to K (the number of Monte Carlo samples, and how it relates to computation cost), or isolating the contribution of the techniques introduced, etc.\", \"questions\": \"For benchmark environments where DQS does not show clear advantages, is there any analysis/explanation?\\nI think an ablation study would be useful. Is there any justification for why you chose not to include ablations?\\nI believe DQS is sample efficient, as it performs better earlier in training for some of the environments; I'm curious how DQS would perform at the late stage of training. Have you ever run the algorithm for longer training (for example, 1M iterations)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel framework for sequential decision-making using diffusion models for sampling from energy-based policies and a new actor-critic algorithm for training diffusion policies based on that framework. This algorithm alleviates the high cost of sampling from continuous action spaces in traditional maximum entropy reinforcement learning methods. 
It has been validated on the authors' custom maze navigation and DeepMind Control Suite tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Proposing a novel Boltzmann policy iteration which is more efficient and is still guaranteed to recover the optimal policy\", \"weaknesses\": \"Lack of novelty: simply integrating diffusion into the traditional SAC, which lacks innovation.\\n\\nBenchmarking in a custom environment lacks persuasiveness, and the tests are not quantified as data.\", \"questions\": \"How does the method compare to recent diffusion RL algorithms that outperform QSM, such as\\n\\n1. Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization, https://arxiv.org/abs/2405.16173\\n\\n2. Policy Representation via Diffusion Probability Model for Reinforcement Learning, https://arxiv.org/abs/2305.13122\\n\\nCan the algorithm demonstrate its advantages in a broader range of test environments, rather than just in a custom maze?\\n\\nCan the experiments be quantified into numbers or tables rather than presenting the results using abstract images?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
CKdlPUWDEE
ME-Switch: A Memory-Efficient Expert Switching Framework for Large Language Models
[ "Jing Liu", "Ruihao Gong", "Mingyang Zhang", "Yefei He", "Jianfei Cai", "Bohan Zhuang" ]
The typical process for LLM’s development involves pre-training a general foundation model on massive data, followed by fine-tuning on task-specific data to obtain a series of specialized experts. Serving these experts can pose significant memory challenges, as loading all experts onto devices is impractical, and frequent switching between experts in response to user requests can incur substantial I/O costs. Previous approaches decompose the expert weights as the pre-trained weights plus delta weights, followed by quantizing the delta weights using output channel-wise step sizes to reduce the model size. However, these methods overlook the fact that certain input channels of delta weights can cause significant quantization errors at extremely low bitwidths. To this end, we introduce ME-Switch, a memory-efficient expert switching framework tailored for serving multiple LLMs. To condense the number of bits required for describing the delta weights, we propose a salient-aware delta compression method that first identifies which input channels of delta weights are salient based on reconstruction error and then employs mixed-precision quantization that selectively quantizes non-salient input channels of delta weights to extremely low bits while keeping the salient ones intact, significantly reducing storage demand while maintaining performance. Extensive experiments show the promising memory efficiency and accuracy of ME-Switch. For example, when serving three models from the Mistral-7B family, ME-Switch reduces the model size by 2.04$\times$ and maintains nearly lossless performance on instruction, mathematical reasoning, and code generation tasks. Furthermore, our method can efficiently serve 16 Mistral-7B models on an NVIDIA A100 GPU.
[ "Large Language Model", "Memory Efficient Compression" ]
Reject
https://openreview.net/pdf?id=CKdlPUWDEE
https://openreview.net/forum?id=CKdlPUWDEE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tdSobiBLrK", "orr2WNG5ru", "opB53z1rWL", "l3GQHKs3Xz", "jftnO8vc96", "jXPi5DjwtZ", "dirOar3QtM", "b7AyIS7PWk", "WoWd6BO2An", "V6e43vhDiB", "S3Kt7aaZLw", "OkD1YkVfAb", "NJEaRfxFn4", "MqVP2UFHop", "MQXtF5ttIJ", "M9b8rsrejV", "LJptftQfQU", "FqNZT1V8eL", "DP6e5pC8eT", "C7shkhyBql", "AUPkASPmfm", "9gpM4jkCvq", "5SfJ49YsmJ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732581425916, 1732709063389, 1732105076018, 1732413638720, 1732104910715, 1732104663560, 1732277534551, 1729646752789, 1737523422703, 1732104480037, 1732103722454, 1730692028060, 1734709169185, 1732104191338, 1732105013923, 1730626071735, 1732582615754, 1730768613545, 1732104430009, 1732413690276, 1732103972768, 1732320766947, 1732709182835 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission912/Reviewer_Ldho" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Reviewer_jYU9" ], [ "ICLR.cc/2025/Conference/Submission912/Reviewer_Ldho" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Reviewer_qVf8" ], [ "ICLR.cc/2025/Conference/Submission912/Area_Chair_yksc" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Reviewer_jYU9" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Reviewer_1fKA" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ], [ "ICLR.cc/2025/Conference/Submission912/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I appreciate the authors\\u2019 detailed response. The updated explanation regarding the cost of I/O, dequantization, and multiplication is helpful for readers to understand how latency changes as the number of models increases and the contribution of each component.\\n\\nThe author emphasis the novelty of the paper is the efficiency improvements compared to traditional MoE methods. Overall, this work has a good vision of the drawbacks of traditional methods' efficiency and their method does show efficiency improvements over traditional token-switch approaches, particularly in reducing storage requirements, training costs, and I/O latency for large language models. However, though the paper provides performance comparisons with MoE models across up to four distinct domains, further discussion would be valuable when comparing at larger scales in a more comprehensive level. I encourage the authors to keep expanding their performance analysis relative to previous methods on more domains and more models in following works.\\n\\nIn conclusion, I will keep my current rating.\"}", "{\"title\": \"Friendly Reminder: Approaching Discussion Deadline\", \"comment\": \"Dear Reviewer 1fKA,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper. As the discussion period is nearing its end, we wanted to kindly check if our responses have sufficiently addressed your concerns. 
If there are any remaining issues, we would be happy to clarify further.\\n\\nThank you again for your valuable feedback and time.\\n\\nBest regards,\\n\\nAuthors of #912\"}", "{\"title\": \"Response to Reviewer Ldho (Part 2)\", \"comment\": \"**Q5. The discussion of latency mentions that the naive implementation has a slight advantage when the number of models is fewer than 4, as shown in Figure 8, and the author attributes this to the quantization overhead. However, why does the quantization overhead stop being the dominant factor as the number of models increases? What is the trend in the profiling of quantization overhead as the number of models continues to increase? A deeper analysis of this would be valuable.**\\n\\n**A5.** Thank you for pointing this out. Our explanation in the main text was not entirely precise. Our method is slightly slower than the naive approach when the number of models is fewer than 4 due to the additional computational cost introduced by the second term $\\\\mathbf{x} \\\\hat{\\\\Delta}$ in Eq. (4). This includes the cost of I/O, dequantization, and multiplication. To illustrate this, we profile their latency percentage in Table V. We observe that the dequantization cost is very small across different model numbers. Initially, latency is dominated by I/O operations because LLM decoding is a memory-bound process when the batch size is small (Lin et al., 2024; Liu et al., 2023). However, as the number of models grows, compute-related operations, such as matrix multiplications, begin to dominate the overall latency. We have included these results and discussions in the revised manuscript.\\n\\nTable V. Decoding latency percentage (%) of different components for $\\\\mathbf{x} \\\\hat{\\\\Delta}$. 
(Testing)\\n\\n| Model number | 1 | 2 | 4 | 8 | 16 |\\n|:------------:|:--:|:--:|:--:|:--:|:--:|\\n| I/O | 87.83 | 90.23 | 58.73 | 35.30 | 22.91 |\\n| Dequantization | 1.74 | 0.87 | 0.15 | 0.09 | 0.05 |\\n| Multiplication | 10.43 | 8.90 | 41.12 | 64.61 | 77.04 |\"}", "{\"title\": \"Follow-up on Rebuttal\", \"comment\": \"Dear Reviewer 1fKA\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper. We have carefully addressed your concerns and provided detailed responses, which we hope have resolved your queries. If you have any additional questions or further concerns, please do not hesitate to let us know.\\n\\nBest regards,\\n\\nAuthors of #912\"}", "{\"title\": \"Response to Reviewer jYU9\", \"comment\": \"Thanks for your valuable comments.\\n\\n**Q1. How does the saliency-based delta quantization technique in this paper differ from other non-magnitude-based saliency quantization techniques like Slim-LLM [E]?**\\n\\n**A1.** Slim-LLM measures weight salience based on **the error induced by removing specific weights**, using the same metric as SparseGPT [F] (see Eq. (3) in [F]), which is primarily designed for pruning. In contrast, our method uses **reconstruction error (see Eq. (2)) to directly assess the impact of quantization on the model\\u2019s output**, making it more suitable for quantization tasks. To highlight the advantages of our saliency metric, we replaced it with the metric used in Slim-LLM and presented the results in Table IV. From the results, our saliency metric consistently outperforms Slim-LLM\\u2019s metric across various datasets and bitwidths. These results are included in Figure 4 and Table C of the revised manuscript.\\n\\nTable IV. 
Performance comparisons with different saliency metrics.\\n\\n| Domain | Dataset | BF16 | 1-bit | Slim-LLM | Ours |\\n|:--------------:|:---------:|:-----:|:-----:|:--------:|:-----:|\\n| Instruct (%) \\u2191 | MMLU | 63.43 | 63.14 | 63.26 | 63.43 |\\n| Math (%) \\u2191 | GSM8K | 73.92 | 53.45 | 57.16 | 59.14 |\\n| | Math | 20.62 | 1.50 | 2.00 | 1.74 |\\n| | Avg. | 47.27 | 27.48 | 29.58 | 30.44 |\\n| Code (%) \\u2191 | HumanEval | 51.20 | 47.00 | 48.20 | 47.60 |\\n| | MBPP | 60.40 | 58.40 | 58.40 | 60.70 |\\n| | Avg. | 55.80 | 52.70 | 53.30 | 54.15 |\\n\\n| Domain | Dataset | BF16 | 1-bit | Slim-LLM | Ours |\\n|:--------------:|:---------:|:-----:|:-----:|:--------:|:-----:|\\n| Instruct (%) \\u2191 | MMLU | 63.43 | 63.72 | 63.68 | 63.95 |\\n| Math (%) \\u2191 | GSM8K | 73.92 | 73.31 | 72.93 | 73.62 |\\n| | Math | 20.62 | 20.44 | 20.16 | 20.48 |\\n| | Avg. | 47.27 | 46.88 | 46.55 | 47.05 |\\n| Code (%) \\u2191 | HumanEval | 51.20 | 47.00 | 48.20 | 51.80 |\\n| | MBPP | 60.40 | 59.10 | 59.60 | 60.70 |\\n| | Avg. | 55.80 | 53.05 | 53.90 | 56.25 |\\n\\n**Q2. Model routing is a well-studied problem! Are there other existing approaches that study model routing in the same scenario where the appropriate model is not known? How is your solution different or better than them?**\\n\\n**A2.** Please refer to Q1 of the general response. We have moved the model-level router section to the appendix to maintain focus on our primary contribution in the main text.\\n\\n**Q3. Related work like GPT-Zip is not cited**\\n\\n**A3.** Thank you for pointing this out. We have included a citation and discussion of GPT-Zip in the related work section of the revised manuscript.\\n\\n**Q4. On L46, the paper claims that no single model can master all tasks simultaneously. While this is not currently the state of affairs, is there formal proof or evidence that this is impossible?**\\n\\n**A4.** Our statement on L46 is based on practical challenges. For further details, please refer to Q2 of the general response.\\n\\n**Q5.
Will the code for this be open source?**\\n\\n**A5.** Yes, we will release the code upon acceptance. We have also provided the core pseudo codes of our Triton kernel in Q13 of Reviewer 1fKA.\\n\\n**Q6. Are there any insights on why mixed precision quantization outperforms the full precision models in certain settings?**\\n\\n**A6.** The performance improvements over individual expert models are primarily attributed to our additional training through efficient distillation, as discussed in L393-395. This process improves the models\\u2019 task-specific performance by optimizing the quantization step size. Similar phenomena are also observed in many quantization literature [G][H][I].\\n\\n**Q7. Is there an estimate of how often in practice the appropriate model is not known in advance? Is this more common than the case where it is known?**\\n\\n**A7.** In public-facing applications or open-ended systems, user inputs may vary widely in content and intent, often lacking clear contextual information. This makes it particularly challenging to determine the appropriate model for queries in advance. Although it is difficult to precisely quantify how often this occurs, such scenarios are common in many practical applications.\\n\\n**Reference:**\\n\\n[E] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models. arXiv 2024.\\n\\n[F] SparseGPT: Massive Language Models Can be Accurately Pruned in One-Shot. ICML 2023.\\n\\n[G] Learned step size quantization. In ICLR, 2020.\\n\\n[H] Learnable Companding Quantization for Accurate Low-bit Neural Networks. CVPR 2021.\\n\\n[I] Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation. In CVPR 2022.\"}", "{\"title\": \"Response to Reviewer qVf8\", \"comment\": \"Thanks for your constructive comments.\\n\\n**Q1. It would be great if both contributions can be evaluated more thoroughly against previous methods. 
There are a lot of previous works (especially for the router) tackling similar problems, it seems that the authors only compared with the most natural baseline, but not more advanced methods. It would be great if the authors can justify it or provide more results.**\\n\\n**A1.** For salient-aware delta compression, we have compared our method against various quantization techniques, as shown in Figure 4 and Table C of the initial submission. Additionally, during the rebuttal period, we conducted further evaluations using Slim-LLM\\u2019s saliency metric [E], with the results detailed in Q1 of Reviewer jYU9. These results consistently demonstrate that our method outperforms all compared approaches across different tasks and bitwidths.\\n\\nFor further discussion on prior work related to model-level routing, please refer to Q1 of the general response. \\n\\n**Q2. I am a little bit confused by how the router is related to the first contribution -- it would be great if the authors could elaborate more? These two parts look quite orthogonal to me.**\\n\\n**A2.** Please refer to Q1 of the general response. \\n\\n**Q3. The second contribution could be put better into context.**\\n\\n**A3.** Thank you for your constructive advice. To better focus the main contribution of our method, we have moved the contents w.r.t. model-level router to the appendix. For additional details, please refer to Q1 of the general response.\\n\\n**Reference:**\\n\\n[E] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models. arXiv 2024.\"}", "{\"comment\": \"Thanks to the authors for addressing all questions. I will keep my score.\"}", "{\"summary\": \"The author contributed a memory-efficient expert-switching framework for LLMs, called ME-Switch. 
The framework is designed to address the challenge of LLMs' storage efficiency when serving multiple large expert models, while maintaining performance through model-level router fine-tuning and efficient distillation fine-tuning.\\n\\nThe proposed method applies mixed-precision quantization for delta weights to preserve model performance and enhance efficiency. This is achieved by selecting non-salient input channels based on quantization errors. Additionally, the approach incorporates a model-level router, implemented through supervised fine-tuning of a relatively small LLM, to switch between expert domain-quantized LLMs according to the user's inquiry. A detailed ablation study is included to further validate the approach.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The author proposes an innovative and effective framework for improving the storage efficiency of serving multiple LLM experts.\", \"This framework leverages mixed-precision quantization for delta weights, intelligently selecting non-salient channels based on quantization errors. The strategy of maintaining performance by selecting the top-k salient channels as non-quantized has been proven effective, as the accuracy surpasses baseline models across multiple domains.\", \"The framework also introduces model-level routing by fine-tuning a small LLM router on domain-specific datasets, transforming the routing problem into a simpler domain classification task. This simplifies the problem and makes the finetuning of the small LLM router more lightweight compared to finetuning MoE models.\", \"The framework implements on-demand swapping, enhancing GPU memory efficiency by loading only the quantized delta weights onto the GPU. This optimization is particularly effective for switching between large LLM experts. Combined with model-level routing and a specialized routing kernel, this approach significantly improves decoding efficiency.
As demonstrated in the paper, it enables hosting up to 16 models on a single GPU, whereas the naive approach encounters out-of-memory (OOM) issues after just 4 models.\", \"A detailed ablation study is presented to demonstrate the effectiveness of mixed-precision quantization compared to baseline approaches across multiple domains. These experiments significantly strengthen the framework's methods. For example, the approach preserves accuracy competitively when compared to fixed-precision and low-rank adaptation methods. Furthermore, the ablation study clearly illustrates the selection of parameters, such as the number of non-quantized channels (k) and quantization bits (b). Lastly, the performance comparison with other weight quantization methods like AWQ, Random, Wanda, and Magnitude supports the intuition that selecting non-salient input channels yields the best performance.\", \"The clarity of writing, including the structure, figures and tables, is good. The author explains the background context and methodology in a good manner.\", \"The paper also discusses other important aspects, such as latency, with a comparison to MoE.\", \"Model-level routing offers the advantage of requiring less intensive training compared to MoE.\", \"The author employs a Triton kernel to further reduce latency.\"], \"weaknesses\": [\"The model-level router relies heavily on domain classification, which requires fine-tuning the router model on domain-specific datasets. This approach may have several disadvantages:\", \"The author conducted research on at most four distinct domains for the router: mathematics, code, Chinese, and instruction. Although the router demonstrates strong classification ability for these domains, it would be worthwhile to explore its performance across more domains, particularly those with potential overlap. 
For example, Text Classification Experts and Sentiment Analysis Experts, or Mathematics and Physics Experts, Computer Science Experts and Data Science Experts. In such scenarios, MoE could leverage the combined knowledge of multiple experts, as it is fine-tuned based on a token-level router. A potential consequence is that, when adding a large number of experts to the MoE, fine-tuning may converge faster due to shared knowledge and parameters across domain experts. In contrast, a model-level router may struggle to distinguish between overlapping domain experts and always rely on a single expert, potentially causing overall performance degradation.\", \"This approach requires the construction of a domain classification dataset, which can be inconvenient. For instance, when adding a new domain expert, the router would need to be re-trained on an updated domain dataset.\", \"While the approach demonstrates effectiveness compared to other quantization methods in ablation studies, the experiment results do not include a comparison with an end-to-end token-level router approach, such as MoE baselines.\", \"Latency analysis: The comparison only covers up to 4 models, after which the naive implementation encounters out-of-memory (OOM) issues. It is a little subtle to conclude that latency is lower than naive model as the number of models is greater than 4.\"], \"questions\": [\"The author highlights the advantages of ME-Switch over MoE, such as reduced fine-tuning work and higher efficiency. However, it would be beneficial to include a more in-depth discussion of its limitations compared to MoE models. This would provide readers with a deeper understanding of the nature of model-level routing, particularly in addressing challenges like handling overlapping expert domains and managing a larger number of models when adapting to new experts, areas where MoE models may offer distinct advantages. 
Notably, the latency advantage of ME-Switch is fully realized when the number of models increases.\", \"Latency analysis: The discussion of latency mentions that the naive implementation has a slight advantage when the number of models is fewer than 4, as shown in Figure 8, and the author attributes this to the quantization overhead. However, why does the quantization overhead stop being the dominant factor as the number of models increases? What is the trend in the profiling of quantization overhead as the number of models continues to increase? A deeper analysis of this would be valuable.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer 1fKA (Part 4)\", \"comment\": \"```python\\ndef twobit_dequant_bmm_scale_kernel(\\n # Pointers to matrices\\n a_ptr,\\n b_ptr,\\n c_ptr,\\n scales_ptr,\\n # Matrix dimensions\\n M,\\n N,\\n K,\\n # The stride variables represent how much to increase the ptr by when moving by 1\\n # element in a particular dimension. E.g. 
`stride_am` is how much to increase `a_ptr`\\n # by to get the element one row down (A has M rows).\\n stride_am,\\n stride_ak,\\n stride_bk,\\n stride_bn,\\n stride_cm,\\n stride_cn,\\n stride_scales,\\n stride_batch_a,\\n stride_batch_b,\\n stride_batch_c,\\n stride_batch_scale,\\n # Meta-parameters\\n BLOCK_SIZE_M: tl.constexpr,\\n BLOCK_SIZE_N: tl.constexpr,\\n BLOCK_SIZE_K: tl.constexpr,\\n GROUP_SIZE_M: tl.constexpr,\\n ACTIVATION: tl.constexpr,\\n):\\n \\\"\\\"\\\"Kernel for computing the matmul C = A x B.\\n A has shape (B, M, K), float\\n B has shape (B, K//n_bits, N), int, packed boolean\\n C has shape (B, M, N),\\n scales is of shape (N) float16\\n \\\"\\\"\\\"\\n # -----------------------------------------------------------\\n # Map program ids `pid` to the block of C it should compute.\\n # This is done in a grouped ordering to promote L2 data reuse.\\n # See above `L2 Cache Optimizations` section for details.\\n pid = tl.program_id(axis=0)\\n pid_batch = tl.program_id(axis=1)\\n\\n num_pid_m = tl.cdiv(M, BLOCK_SIZE_M)\\n num_pid_n = tl.cdiv(N, BLOCK_SIZE_N)\\n num_pid_k = tl.cdiv(K, BLOCK_SIZE_K)\\n\\n num_pid_in_group = GROUP_SIZE_M * num_pid_n\\n group_id = pid // num_pid_in_group\\n first_pid_m = group_id * GROUP_SIZE_M\\n group_size_m = min(num_pid_m - first_pid_m, GROUP_SIZE_M)\\n\\n pid_m = first_pid_m + (pid % group_size_m)\\n pid_n = (pid % num_pid_in_group) // group_size_m\\n\\n offs_m = (pid_m * BLOCK_SIZE_M + tl.arange(0, BLOCK_SIZE_M)) % M\\n offs_n = (pid_n * BLOCK_SIZE_N + tl.arange(0, BLOCK_SIZE_N)) % N\\n\\n offs_am = tl.max_contiguous(tl.multiple_of(offs_m, BLOCK_SIZE_M), BLOCK_SIZE_M)\\n offs_bn = tl.max_contiguous(tl.multiple_of(offs_n, BLOCK_SIZE_N), BLOCK_SIZE_N)\\n offs_k = tl.arange(0, BLOCK_SIZE_K)\\n\\n a_ptrs = (\\n a_ptr\\n + (offs_am[:, None] * stride_am + offs_k[None, :] * stride_ak)\\n + pid_batch * stride_batch_a\\n )\\n\\n # Adapted from GPTQ-Triton
(https://github.com/fpgaminer/GPTQ-triton)\\n # b_ptrs is set up such that it repeats elements along the K axis n_bits times\\n b_ptrs = (\\n b_ptr\\n + ((offs_k[:, None] // 16) * stride_bk + offs_bn[None, :] * stride_bn)\\n + pid_batch * stride_batch_b\\n )\\n scales_ptrs = scales_ptr + offs_bn * stride_scales + pid_batch * stride_batch_scale\\n\\n # (BLOCK_SIZE_K, BLOCK_SIZE_N)\\n # shifter is used to extract each bit of each element in the int matrix\\n shifter = (offs_k % 16) * 2\\n scales = tl.load(scales_ptrs)\\n\\n # -----------------------------------------------------------\\n # Iterate to compute a block of the C matrix.\\n # We accumulate into a `[BLOCK_SIZE_M, BLOCK_SIZE_N]` block\\n # of bf32 values for higher accuracy.\\n # `accumulator` will be converted back to bf16 after the loop.\\n accumulator = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=tl.float32)\\n for k in range(0, num_pid_k):\\n # Load the next block of A and B, generate a mask by checking the K dimension.\\n # If it is out of bounds, set it to 0.\\n a = tl.load(a_ptrs)\\n # b = tl.load(b_ptrs, mask=offs_k[:, None] < K - k * BLOCK_SIZE_K, other=0)\\n b = tl.load(b_ptrs) # (BLOCK_SIZE_N,)\\n\\n # Convert B from int to a.dtype\\n # b: (BLOCK_SIZE_K, BLOCK_SIZE_N)\\n b = (b >> shifter[:, None]) & 0x3\\n b = (b - 2).to(a.dtype)\\n b = b * scales[None, :] # bf16\\n # b = b.to(a.dtype)\\n\\n # We accumulate along the K dimension.\\n accumulator += tl.dot(a, b)\\n # Advance the ptrs to the next K block.\\n a_ptrs += BLOCK_SIZE_K * stride_ak\\n # b_ptrs += BLOCK_SIZE_K * stride_bk\\n b_ptrs += (BLOCK_SIZE_K // 16) * stride_bk\\n # You can fuse arbitrary activation functions here\\n # while the accumulator is still in bf32!\\n # if ACTIVATION == \\\"leaky_relu\\\":\\n # accumulator = leaky_relu(accumulator)\\n c = accumulator.to(tl.float16)\\n\\n # -----------------------------------------------------------\\n # Write back the block of the output matrix C with masks.\\n offs_cm = pid_m * 
BLOCK_SIZE_M + tl.arange(0, BLOCK_SIZE_M)\\n offs_cn = pid_n * BLOCK_SIZE_N + tl.arange(0, BLOCK_SIZE_N)\\n c_ptrs = (\\n c_ptr\\n + stride_cm * offs_cm[:, None]\\n + stride_cn * offs_cn[None, :]\\n + pid_batch * stride_batch_c\\n )\\n c_mask = (offs_cm[:, None] < M) & (offs_cn[None, :] < N)\\n tl.store(c_ptrs, c, mask=c_mask)\\n```\\n\\nWe have included the above pseudo-codes in Section C of the revised manuscript.\"}", "{\"title\": \"Response to all reviewers\", \"comment\": \"We sincerely thank all reviewers for their valuable comments.\\n\\nThe reviewers agree that:\\n\\n### **Important problem**:\\n* \\u201cThe problem is well-motivated \\u2026 The problem of serving multiple fine-tuned models is a very poignant problem.\\u201d (Reviewer jYU9)\\n\\n### **Novel and effective method**:\\n* \\u201cThe application of reconstruction-based quantization to input model weights is novel.\\u201d (Reviewer 1fKA)\\n* \\u201cThe proposed idea makes sense.\\u201d (Reviewer qVf8)\\n* \\u201cThe author proposes an innovative and effective framework for improving the storage efficiency of serving multiple LLM experts.\\u201d (Reviewer Ldho)\\n\\n### **Promising performance**:\\n* \\u201cResults show minor to negligible losses in accuracy across many tasks, including some gains \\u2026 the reduction in memory usage as compared to other baselines is somewhat compelling.\\u201d (Reviewer 1fKA)\\n* \\u201cExperiments are comprehensive with convincing results.\\u201d (Reviewer jYU9)\\n* \\u201cThese experiments significantly strengthen the framework's methods ... the approach preserves accuracy competitively\\u201d (Reviewer Ldho)\\n\\n# General Response\\n\\n**Q1. Question regarding the model-level router.**\\n\\n**A1.** We sincerely appreciate the reviewers\\u2019 feedback and would like to take this opportunity to clarify our main contribution. Our work primarily addresses the critical challenge of **high storage demands when serving multiple LLMs**.
As noted in L49-71, serving three versions of LLaMA-2-70B requires over 384GB of memory, posing a substantial memory bottleneck. To tackle this issue, we propose salient-aware delta compression, which selectively quantizes the non-salient input channels of the delta weights while keeping the salient ones unchanged. This approach significantly reduces storage requirements while preserving model performance. \\n\\nThe model-level router is a smaller, orthogonal component included to demonstrate the feasibility of automatically selecting the optimal model for each query, given the unpredictable nature of user input. In line with Reviewer qVf8\\u2019s suggestion, we have moved the model-level router part to the appendix to maintain the focus on our primary contribution in the main text.\\n\\n**Q2. Why is it assumed that no single model can master all tasks simultaneously, necessitating the use of multiple LLMs, each tailored for specific tasks?**\\n\\n**A2.** The current paradigm of LLMs generally follows a pretrain-finetune framework (Achiam et al., 2023; Team et al., 2023; Touvron et al., 2023; Jiang et al., 2023). These models are first pretrained on extensive and diverse datasets to acquire broad knowledge and capabilities, and then fine-tuned on specific downstream tasks to achieve alignment or specialization. For instance, even high-capacity LLMs like the MoE model Mixtral-8x22B are fine-tuned on instruction-following data to create specialized variants such as Mixtral-8x22B-Instruct-v0.1, enhancing their ability to follow human instructions. \\nWhile LLMs are powerful, fine-tuning for a specific task to enhance performance is generally more practical and efficient than multitask fine-tuning, which often encounters conflicting objectives, mode collapse, and demands meticulous data mixing along with substantial training resources [A][B]. 
For example, DeepSeek-Coder-V2-Base [C], a 236B-parameter MoE code model, is fine-tuned from DeepSeek-V2 [D] to achieve significantly improved performance in the code domain (90.2% vs. 48.8% on HumanEval) while demonstrating reduced effectiveness in general question-answering tasks (47.5% vs. 53.4% on NaturalQuestions). This highlights the necessity of obtaining multiple task-specific LLMs. We have included the discussions in the introduction of the revised manuscript.\\n\\n**Reference:**\\n\\n[A] Llama 2: Open foundation and fine-tuned chat models. arXiv 2023.\\n\\n[B] Gemini: a family of highly capable multimodal models. arXiv 2023.\\n\\n[C] DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence. arXiv 2024.\\n\\n[D] DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model. arXiv 2024.\\n\\n# Summary of changes\\n\\nWe have revised our submission and summarized our updates as follows:\\n\\n* We have added further discussions on our assumption that no single model can master all tasks simultaneously, highlighting the need for multiple specialized LLMs. (Reviewers 1fKA and jYU9)\\n* We have relocated the discussion of the model-level router to the appendix to maintain a clear focus on our primary contribution in the main text. (Reviewers 1fKA and qVf8)\\n* We have provided more empirical results in terms of 1) ME-Switch without salient-aware delta compression (Reviewer 1fKA); 2) more task domains (Reviewer 1fKA); 3) an additional saliency metric in delta compression. (Reviewer jYU9)\\n* We have included additional pseudocode to illustrate the implementation of our efficient Triton kernel. (Reviewer 1fKA)\\n* We have updated the related work section to include a discussion of GPT-Zip. (Reviewer jYU9)\"}", "{\"summary\": \"In this paper, the authors focus on the problem of serving multiple models (fine-tuned over\\nthe same base) together.
The challenge is that each model occupies a lot of memory and \\nthe authors build on the idea of delta compression between the FT'ed models and the base.\\n\\nSpecifically, the authors proposed a mixed-precision compression technique for the delta.\\n\\nMoreover, the authors proposed a routing-based method to pick the right model to use\\ngiven user input.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The proposed idea makes sense, and seems to work better than the baseline.\"], \"weaknesses\": [\"It would be great if both contributions can be evaluated more thoroughly against previous methods\", \"The second contribution could be put better into context\"], \"questions\": \"1. I am a little bit confused by how the router is related to the first contribution --\\nit would be great if the authors could elaborate more? These two parts look quite\\northogonal to me.\\n\\n2. There are a lot of previous works (especially for the router) tackling similar problems;\\nit seems that the authors only compared with the most natural baseline, but not\\nmore advanced methods. It would be great if the authors can justify it or provide more\\nresults.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper tackles the growing challenge of serving multiple fine-tuned LLMs while keeping memory usage in check. ME-Switch\\u2019s approach\\u2014using salient-aware delta compression to carefully quantize non-salient channels and preserve the important ones\\u2014feels genuinely promising. The experiments are generally well-conducted, and the authors have made a solid effort to broaden their evaluations into medical and legal domains, which is somewhat encouraging.\\n\\nStill, certain issues remain.
The scope of evaluation could be expanded, and a deeper comparison with Mixture-of-Experts (MoE) methods would better contextualize the work\\u2019s impact. While the authors\\u2019 response did offer improvements, including moving the router details to the appendix and clarifying latency considerations, some core questions about generalization and scalability remain unanswered.\\n\\nGiven the borderline average score, the paper\\u2019s contribution seems strong but not quite ready for acceptance. I would urge the authors to strengthen the breadth of their evaluations, incorporate MoE comparisons, and provide a clearer narrative around how these techniques can scale. With these changes, I believe this work could become a valuable contribution to the community.\", \"additional_comments_on_reviewer_discussion\": \"The paper received split reviews (ratings from 3-6/10) with primary concerns focused on the model routing mechanism, evaluation thoroughness, and domain specialization assumptions. During rebuttal, authors clarified their main contribution was storage efficiency rather than routing (which was moved to the appendix), provided additional experimental results including evaluations with new models (BioMistral-7B, Saul-7B-Base), and added detailed technical analyses. Two reviewers explicitly maintained their scores after the rebuttal, and the remaining two left their ratings unchanged.\"}", "{\"title\": \"Response to Reviewer 1fKA (Part 2)\", \"comment\": \"**Q4. Results are defined only on a very narrow set of task domains. This makes it quite unclear how the proposed approach generalizes. Domain-specific expert setups in a general setting might involve dozens of models in a large-scale system.**\\n\\n**A4.** To show how our salient-aware delta compression generalizes, we further include [BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) as an expert in the medical domain and [Saul-7B-Base](https://huggingface.co/Equall/Saul-7B-Base) as an expert in the legal domain.
Both models are fine-tuned from [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1). We evaluate their performance using subsets of MMLU corresponding to these domains. As shown in Table III, our salient-aware delta compression results in only a minor performance drop, maintaining nearly lossless performance. These results are included in Section 5.1 of the revised manuscript.\\n\\nTable III. Results on the medical and legal domain for the Mistral-7B family.\\n\\n| Method | Clinical Knowledge | Medical Genetics | Anatomy | Professional Medicine | College Biology | College Medicine | Avg. | International Law | Jurisprudence | Professional law | Avg. |\\n|:------:|:------------------:|:----------------:|:-------:|:---------------------:|:---------------:|:-----------------:|:------:|:-------------------:|:---------------:|:-----------------:|:------:|\\n| Baseline | 64.53 | 69.00 | 57.89 | 57.72 | 58.33 | 58.38 | 60.98 | 74.38 | 71.30 | 43.02 | 62.90 |\\n| Ours | 62.26 | 68.00 | 48.89 | 57.72 | 63.89 | 61.27 | 60.34 | 75.76 | 67.59 | 43.68 | 62.34 |\\n\\n**Q5. In general, the paper\\u2019s writing is redundant, i.e. the core premises, contributions, and prior work are presented multiple times each, with similar levels of depth. The authors could make room to do more experiments by removing redundant components.**\\n\\n**A5.** Thank you for your feedback. We have simplified the writing of our manuscript, moved the router section to the appendix, and included additional experiments in the main text.\\n\\n**Q6. The approach with which the router is trained generalizes very poorly. A router must be trained based on a collection of domain-specific models for every single setup. A router trained on a specific domain-specific set of multiple choice questions is quite specific to the input tasks.**\\n\\n**A6.** Please refer to Q1 of the general response. 
Additionally, while it is true that the router needs to be trained for each new setup, the associated training cost is manageable due to the small number of trainable parameters, allowing it to complete within a single GPU day.\\n\\n**Q7. There is no presentation of an accuracy-memory trade-off across approaches.**\\n\\n**A7.** We would like to clarify that Figure 4 in the initial submission presents the accuracy-memory trade-off of different methods, directly addressing this concern. The results demonstrate that our method consistently outperforms other approaches across various bitwidths and tasks.\\n\\n**Q8. Introduction \\u2014 the authors might consider reframing motivations around MoE as well and listing memory pressure as a primary motivator for LLM task specialization.**\\n\\n**A8.** Thank you for the suggestion. We would like to highlight that our approach is focused on general-purpose LLMs rather than being tailored to MoE models, addressing the memory challenges associated with serving multiple specialized LLMs.\\n\\n**Q9. L78 the \\\"information loss\\\" that occurs with subsequent methods can be further specified. Is this something that affects downstream accuracy, or does this refer to something else?**\\n\\n**A9.** The \\u201cinformation loss\\u201d mentioned in L78 refers to the quantization error introduced by using per-tensor quantization (Liu et al., 2024a), as described in L76-80. This approach applies a single quantization step size ($s$ in Eq. (1)) for an entire layer, which limits its ability to capture fine-grained variations within the layer. Consequently, this quantization error can lead to a drop in final performance. To improve clarity, we have replaced the term \\u201cinformation loss\\u201d with \\u201cquantization error\\u201d in the revised manuscript.\\n\\n**Q10. Why doesn\\u2019t reconstruction loss merely end up choosing values which are closest to values which are fp16 quantized, i.e. representable with low approximation error?
More analysis of reconstruction loss would be helpful to prove efficacy.**\\n\\n**A10.** As described in L208-215 and Eq. (2) of the manuscript, the reconstruction error is determined by both the input $\\\\mathbf{x}\\\\_i$ and the quantization noise $\\\\Delta\\\\_{ij} - \\\\hat{\\\\Delta}\\\\_{ij}$ (i.e., approximation error). Even when the quantization noise is small, the reconstruction error can be significant if the input magnitude is large.\"}", "{\"title\": \"Response to Reviewer Ldho (Part 1)\", \"comment\": \"Thanks for your valuable comments.\\n\\n**Q1. Although the router demonstrates strong classification ability for these domains, it would be worthwhile to explore its performance across more domains, particularly those with potential overlap. In such scenarios, MoE could leverage the combined knowledge of multiple experts, as it is fine-tuned based on a token-level router. A potential consequence is that, when adding a large number of experts to the MoE, fine-tuning may converge faster due to shared knowledge and parameters across domain experts. In contrast, a model-level router may struggle to distinguish between overlapping domain experts and always rely on a single expert, potentially causing overall performance degradation.**\\n\\n**A1.** We acknowledge that overlapping domains can present challenges for a model-level router. However, as mentioned in Q1 of the general response, our primary contribution lies in addressing the critical challenge of **high storage demands when serving multiple LLMs**. The model-level router is presented as a **simple exploration** on automating the switching process. 
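For intuition, the switching step can be sketched as single-label domain classification followed by a dispatch. This is only a toy illustration: the expert names and keyword features below are hypothetical stand-ins for the fine-tuned small-LLM router described above.

```python
# Toy sketch of model-level routing as single-label domain classification.
# The real router is a fine-tuned small LLM; the registries below are
# hypothetical stand-ins, purely for illustration.

DOMAIN_EXPERTS = {
    "math": "expert-math",
    "code": "expert-code",
    "instruct": "expert-instruct",
}

DOMAIN_KEYWORDS = {
    "math": {"solve", "equation", "integral", "proof"},
    "code": {"python", "function", "bug", "compile"},
    "instruct": {"explain", "summarize", "rewrite"},
}

def classify_domain(query: str) -> str:
    """Pick the single domain whose keyword set best overlaps the query."""
    tokens = set(query.lower().split())
    return max(DOMAIN_KEYWORDS, key=lambda d: len(tokens & DOMAIN_KEYWORDS[d]))

def route(query: str) -> str:
    """Model-level switching: the whole request goes to exactly one expert."""
    return DOMAIN_EXPERTS[classify_domain(query)]
```

Dispatching the entire request to one expert is what distinguishes this model-level scheme from the per-token routing used inside MoE layers.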
Handling overlapping domains would require extending the current setup to a multi-label classification framework, which we consider a promising direction for future work.\\n\\nFurthermore, as discussed in L295-306, our method offers a cost-efficient solution for handling diverse user queries by requiring training only the model-level router given a set of well-trained expert models. In contrast, MoE models necessitate training not just the token-level router but also all network parameters. For example, training an MoE using existing Math, Code, and Wikipedia experts requires over 900 GPU days (as reported by Sukhbaatar et al., 2024), whereas our approach only requires training at the hours level, completing within a single GPU day, making it significantly more efficient. This efficiency difference stems from the distinct characteristics of experts in MoE models versus existing well-trained experts. In MoE models, each expert often acts as a generalist across all domains due to the constraints imposed by the load balancing loss (Fedus et al., 2022; Jiang et al., 2024). In contrast, existing well-trained experts typically specialize in specific domains, which align more naturally with our approach.\\n\\n**Q2. This approach requires the construction of a domain classification dataset, which can be inconvenient. For instance, when adding a new domain expert, the router would need to be re-trained on an updated domain dataset.**\\n\\n**A2.** While our approach requires re-training when introducing a new domain, we believe the associated overhead is manageable. The trainable parameters in our model-level router are significantly fewer compared to those in MoE, as mentioned in Q1.\\n\\n**Q3. 
While the approach demonstrates effectiveness compared to other quantization methods in ablation studies, the experimental results do not include a comparison with an end-to-end token-level router approach, such as MoE baselines.**\\n\\n**A3.** As discussed in L295-306, our method fundamentally differs from MoE. MoE employs token-level routing, which dynamically assigns individual tokens in a sequence to different experts within the same model. In contrast, our ME-Switch utilizes model-level switching, selecting the most suitable model to handle an entire user request. This distinction also significantly affects training overhead. As mentioned in Q1, training an MoE with multiple experts demands hundreds of GPU days, making a direct comparison challenging due to the limited computational resources available.\\n\\n**Q4. The comparison only covers up to 4 models, after which the naive implementation encounters out-of-memory (OOM) issues. It is a little subtle to conclude that latency is lower than that of naive inference when the number of models is greater than 4.**\\n\\n**A4.** Our method scales more efficiently than the naive inference method. In the naive method, each $\\\\mathbf{x}\\\\mathbf{W}$ is computed independently during the forward pass, requiring a distinct $\\\\mathbf{W}$ for each user in the batch. As the number of models grows (>=4), this approach results in substantial I/O costs due to the loading of large weight matrices. In contrast, our method leverages shared pre-trained model weights $\\\\mathbf{W}$ along with a set of small deltas $\\\\hat{\\\\Delta}$, significantly reducing the inference I/O burden. This is demonstrated in Figure 8 of our initial submission. We have included the discussion in the revised manuscript.\"}", "{\"summary\": \"This paper addresses the problem of serving multiple expert models, i.e., multiple models finetuned for different tasks from the same base model. 
Serving all fully finetuned models is expensive since they take up memory, and switching them in and out of memory can be slow. Existing methods use low-rank finetuning to reduce the storage required for finetuned models; however, low-rank finetuning does not match full finetuning in quality, so recent methods instead quantize the delta between the finetuned model and the base model. However, these quantization methods can lead to significant quantization errors at low bitwidths. This paper proposes a saliency-aware method for quantizing the delta between the finetuned and base model. In addition, this paper suggests a dynamic routing method to determine which expert model to route user requests to when the appropriate model for a user request is not known in advance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is generally well written and easy to follow, and the problem is well-motivated.\\n2. Experiments are comprehensive with convincing results and there are sufficient ablations to justify different design choices.\\n3. The problem of serving multiple finetuned models is a very pressing problem in this era of LLMs.\", \"weaknesses\": \"Major\\n1. How does the saliency-based delta quantization technique in this paper differ from other non-magnitude-based saliency quantization techniques like https://arxiv.org/pdf/2405.14917 ?\\n2. Model routing is a well-studied problem! Are there other existing approaches that study model routing in the same scenario where the appropriate model is not known? How is your solution different from or better than them?\\n\\nMinor\\n1. Related work like GPT-Zip https://openreview.net/pdf?id=hO0c2tG2xL is not cited.\\n2. On line 46, the paper claims that no single model can master all tasks simultaneously. While this is not currently the state of affairs, is there a formal proof or evidence that this is impossible?\", \"questions\": \"1. Will the code for this be open source?\\n2. 
Are there any insights on why mixed-precision quantization outperforms the full-precision models in certain settings?\\n3. Is there an estimate of how often in practice the appropriate model is not known in advance? Is this more common than the case where it is known?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your feedback\", \"comment\": \"Dear Reviewer Ldho,\\n\\nThank you for your thoughtful feedback and for acknowledging the contributions and vision of our work. We appreciate your recognition of the efficiency improvements our method offers over traditional MoE approaches, as well as your thoughtful suggestions regarding larger-scale comparisons and expanded performance analysis across more domains and models.\\n\\nYour insights are helpful, and we are grateful for your time and effort in reviewing our work.\\n\\nBest regards,\\n\\nAuthors of Paper #912\"}", "{\"summary\": \"The authors introduce ME-Switch, a memory-efficient framework for serving MoE-based LLMs. Given a set of MoE models across specified domains, they present a quantization technique for deltas from the base model which they argue is better than other approaches per its retention of accuracy and effectiveness in reducing model size. 
The approach attempts to estimate the saliency of particular weight deltas for quantization based on a reconstruction loss-based scheme. As part of the assumption that models are domain-specific, the authors train a routing model to delegate user inputs to a domain-specific model.\\n\\nThe authors showcase retention of task performance across various domains in conjunction with a reduction in GPU memory usage attributed to efficient quantization.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The application of reconstruction-based quantization to input model weights is novel insofar as other methods (low-rank approximation, uniform quantization) are not as effective.\", \"Results show minor to negligible losses in accuracy across many tasks, including some gains, though the degree to which these are due to noise is not immediately clear.\", \"The reduction in memory usage as compared to other baselines is somewhat compelling, although compared to baselines without any delta-based approaches, which the authors did not contribute.\"], \"weaknesses\": [\"The assumption that each LLM specializes in a distinct domain is a large one, and limits the general applicability of the authors' approach. Indeed, in many MoE-based settings, model diversity is sufficient to improve performance significantly, and task specialization is not even considered. The given assumption also requires training a domain-based router, which itself requires ablations. The authors need to argue that this setting is representative and useful.\", \"There are missing baselines. The sensitivity and quality of the domain-based routing setup as compared to a baseline without adaptive quantization makes it difficult to disambiguate the effect of improved quantization alone.\", \"Similarly, there is no baseline that foregoes SFT or one that compares SFT without quantization or task routing. 
These components should be independently ablated.\", \"Results are defined only on a very narrow set of task domains. This makes it quite unclear how the proposed approach generalizes. Domain-specific expert setups in a general setting might involve dozens of models in a large-scale system.\", \"In general, the paper\\u2019s writing is redundant, i.e. the core premises, contributions, and prior work are presented multiple times each, with similar levels of depth. The authors could make room to do more experiments by removing redundant components.\", \"The approach with which the router is trained generalizes very poorly. A router must be trained based on a collection of domain-specific models for every single setup. A router trained on a specific domain-specific set of multiple choice questions is quite specific to the input tasks.\", \"There is no presentation of an accuracy-memory tradeoff across approaches. This, combined with weak baselines, reduces the strength of the contribution.\"], \"questions\": [\"Introduction \\u2014\\u00a0the authors might consider reframing motivations around MoE as well and listing memory pressure as a primary motivator for LLM task specialization.\", \"Line 78: the \\\"information loss\\\" that occurs with subsequent methods can be further specified. Is this something that affects downstream accuracy, or does this refer to something else?\", \"Why doesn\\u2019t reconstruction loss merely end up choosing values which are closest to values which are fp16 quantized, i.e. representable with low approximation error? More analysis of reconstruction loss would be helpful to prove efficacy.\", \"Why is fp16 used throughout the experiment setups rather than bf16, which is supported on A100s (used in the paper) and is far more common in inference settings?\", \"What is the effect of using different backbones for the same task with the proposed approach?\", \"What additional details can be shared about the Triton kernel? 
What is the performance benefit of its inclusion? How is the implementation structured?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 1fKA (Part 3)\", \"comment\": \"**Q11. Why is fp16 used throughout the experiment setups rather than bf16, which is supported on A100s (used in the paper) and is far more common in inference settings?**\\n\\n**A11.** Thank you for pointing this out. We indeed use BF16 in our experiments. This has been clarified and included in the implementation details in Section 5 of the revised manuscript.\\n\\n**Q12. What is the effect of using different backbones for the same task with the proposed approach?**\\n\\n**A12.** We would like to clarify that we have conducted experiments on the instruction domain using the Mistral-7B, LLaMA-2-13B, and LLaMA-3-8B model families. The results, presented in Tables 1 and B of the initial submission, demonstrate that our method effectively adapts to different backbones while maintaining strong performance across tasks.\\n\\n**Q13. What additional details can be shared about the Triton kernel? What is the performance benefit of its inclusion? How is the implementation structured?**\\n\\n**A13.** The Triton kernel is designed to accelerate model inference. Without it, $\\\\tilde{\\\\Delta}$ must be loaded from high-bandwidth memory (HBM) into SRAM, dequantized into BF16 format, written back to HBM, and then reloaded for multiplication with $\\\\mathbf{x}$. Our Triton kernel fuses dequantization and multiplication into a single step, reducing intermediate memory operations and eliminating unnecessary data transfers, resulting in faster inference. 
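In plain NumPy terms, the arithmetic being fused is equivalent to the sketch below (illustrative only, with toy shapes and an assumed per-tensor scale; the actual kernel performs this tile-by-tile on the GPU):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
x = rng.standard_normal((1, d)).astype(np.float32)    # activation
W = rng.standard_normal((d, d)).astype(np.float32)    # shared base weights
scale = np.float32(0.05)                              # assumed per-tensor scale
Q = rng.integers(-8, 8, size=(d, d)).astype(np.int8)  # low-bit quantized delta

# Unfused path: materialize the dequantized delta in memory, then multiply.
# The extra write/read of `delta` is the I/O overhead the fused kernel removes.
delta = scale * Q.astype(np.float32)
y_unfused = x @ W + x @ delta

# Fused path: dequantize inside the product, never materializing `delta`.
y_fused = x @ W + scale * (x @ Q.astype(np.float32))

assert np.allclose(y_unfused, y_fused, atol=1e-4)
```

Both paths compute x(W + s·Q); the fused version simply avoids writing the dequantized delta matrix back to memory.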
To provide a clearer understanding, we include the core pseudo-codes of the Triton kernel as follows:\"}", "{\"title\": \"Follow-up on Rebuttal\", \"comment\": \"Dear Reviewer qVf8\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper. We have carefully addressed your concerns and provided detailed responses, which we hope have resolved your queries. If you have any additional questions or further concerns, please do not hesitate to let us know.\\n\\nBest regards,\\n\\nAuthors of #912\"}", "{\"title\": \"Response to Reviewer 1fKA (Part 1)\", \"comment\": \"Thanks for your constructive comments.\\n\\n**Q1. The assumption that each LLM specializes in a distinct domain is a large one, and limits the general applicability of the authors' approach. Indeed, in many MoE-based settings, model diversity is sufficient to improve performance significantly, and task specialization is not even considered. The authors need to argue that this setting is representative and useful.**\\n\\n**A1.** We would like to emphasize that our approach is designed for general-purpose LLMs rather than being tailored to MoE models. As discussed in L295-300, MoE employs **token-level routing** that assigns individual tokens in a sequence to different experts within the same model, whereas our ME-Switch uses **model-level switching** to select the most suitable model for an entire user request. For a detailed discussion regarding the assumption that each LLM specializes in a distinct domain, please refer to Q2 of the general response.\\n\\n**Q2. There are missing baselines. The sensitivity and quality of the domain-based routing setup as compared to a baseline without adaptive quantization makes it difficult to disambiguate the effect of improved quantization alone.**\\n\\n**A2.** We present the results of ME-Switch without applying our salient-aware delta compression in Tables I and II. 
The results demonstrate that both salient-aware delta compression alone and model-level routing alone achieve performance comparable to their respective unquantized expert models across a variety of downstream tasks. We have included the results in Section D of the revised manuscript.\\n\\nTable I. Performance comparison of different methods for the Mistral-7B family. \\\"Baseline\\\" refers to the unquantized experts, while \\\"SADC\\\" represents our salient-aware delta compression.\\n\\n| Method | STEM | Hums. | Social | Other | Avg. | GSM8K | Math | Avg. | HumanEval | MBPP | Avg. |\\n|:------------------:|:-----:|:-----:|:------:|:-----:|:-----:|-------|-------|-------|-----------|-------|-------|\\n| Baseline | 52.05 | 68.83 | 73.42 | 65.43 | 63.43 | 73.92 | 20.62 | 47.27 | 51.20 | 60.40 | 55.80 |\\n| Router Only | 52.05 | 68.83 | 73.42 | 65.43 | 63.43 | 74.15 | 20.72 | 47.44 | 51.20 | 60.40 | 55.80 |\\n| SADC Only | 53.17 | 69.09 | 73.88 | 65.40 | 63.95 | 73.62 | 20.48 | 47.05 | 51.80 | 60.70 | 56.25 |\\n| SADC + Router | 51.49 | 68.37 | 73.60 | 66.08 | 63.32 | 73.39 | 20.30 | 46.85 | 51.80 | 60.70 | 56.25 |\\n\\nTable II. Performance comparisons of different methods for the LLaMA-2-13B family. \\\"Baseline\\\" refers to the unquantized experts, while \\\"SADC\\\" represents our salient-aware delta compression.\\n\\n| Method | STEM | Hums. | Social | Other | Avg. | GSM8K | Math | Avg. | C-Eval | C-MMLU | Avg. |\\n|:------------------:|:-----:|:-----:|:------:|:-----:|:-----:|:-------:|:------:|:-------:|:-----------:|:-------:|:-------:|\\n| Baseline | 44.26 | 59.79 | 63.20 | 56.57 | 54.60 | 69.14 | 8.48 | 38.81 | 40.28 | 39.16 | 39.72 |\\n| Router Only | 44.17 | 59.73 | 63.20 | 56.57 | 54.55 |68.61 | 8.52 | 38.57 | 40.28 | 39.16 | 39.72 |\\n| SADC Only | 44.57 | 60.87 | 64.00 | 58.04 | 55.45 | 70.05 | 13.20 | 41.63 | 40.13 | 39.91 | 40.02 |\\n| SADC + Router | 44.51 | 60.87 | 64.00 | 58.04 | 55.43 | 69.90 | 13.14 | 41.52 | 40.13 | 39.84 | 39.99 |\\n\\n**Q3. 
There is no baseline that foregoes supervised fine-tuning (SFT) or one that compares SFT without quantization. These components should be independently ablated.**\\n\\n**A3.** Please refer to Q1 in the general response.\"}", "{\"title\": \"Thanks for your feedback\", \"comment\": \"Dear Reviewer jYU9,\\n\\nThank you for your feedback! We greatly appreciate the constructive reviews and valuable suggestions to enhance our work.\\n\\nBest regards,\\n\\nAuthors of Paper #912\"}", "{\"title\": \"Friendly Reminder: Approaching Discussion Deadline\", \"comment\": \"Dear Reviewer qVf8,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper. As the discussion period is nearing its end, we wanted to kindly check if our responses have sufficiently addressed your concerns. If there are any remaining issues, we would be happy to clarify further.\\n\\nThank you again for your valuable feedback and time.\\n\\nBest regards, \\n\\nAuthors of #912\"}" ] }
CKYsXi0dOV
BLIP-3-Video: You Only Need 32 Tokens to Represent a Video Even in VLMs
[ "Michael S Ryoo", "Honglu Zhou", "Shrikant Kendre", "Can Qin", "Le Xue", "Manli Shu", "Silvio Savarese", "Ran Xu", "Caiming Xiong", "Juan Carlos Niebles" ]
We present BLIP-3-Video, a multimodal language model for videos, particularly designed to efficiently capture temporal information over multiple frames. BLIP-3-Video takes advantage of the `temporal encoder' in addition to the conventional visual tokenizer, which maps a sequence of tokens over multiple frames into a compact set of visual tokens. This enables BLIP-3-Video to use much fewer visual tokens than its competing models (e.g., 32 vs. 4608 tokens). We explore different types of temporal encoders, including learnable spatio-temporal pooling as well as sequential models like Token Turing Machines. We experimentally confirm that BLIP-3-Video obtains video question-answering accuracies comparable to much larger state-of-the-art models (e.g., 34B), while being much smaller (i.e., 4B) and more efficient by using fewer visual tokens.
[ "video representation", "video foundation model", "vlm", "multimodal language model" ]
Reject
https://openreview.net/pdf?id=CKYsXi0dOV
https://openreview.net/forum?id=CKYsXi0dOV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yvNcVtFkNZ", "x953jlKd0V", "vvb4ObZxeN", "uJsklavG3V", "qaBVhC7D7s", "o5Q8CHEBqk", "nwEg1y3MGY", "lCJXTcYIaZ", "kjjusrVFRh", "j9GN5KOdCv", "j4maZl86Iv", "e754YnoL9A", "aMhWC9yGvh", "a8lxeIARgV", "ZVK78HPXoD", "XoXcBg20Qz", "WVSpWE6Zv9", "V3xPFUb2n3", "TXgy923AeL", "Rzx4kSUejz", "RaOx1M8KRt", "N2Cr3GI1x6", "INLB4uD7ux", "GA62mppCP8", "FVGFVCy5Ft", "EqTfe3TBIj", "CiNMlmSZS2", "1wna5KygAD", "0ACxoK2IcD" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1730699935363, 1732610195161, 1732385955173, 1732647322879, 1732342182146, 1732508696653, 1732756055997, 1732543525107, 1732385739869, 1731253782276, 1732330624075, 1732539174679, 1734886896070, 1732646897010, 1730102898619, 1732647231476, 1732385827017, 1732513958994, 1732330728877, 1732385766241, 1732334957903, 1739468224007, 1732334900215, 1732342246201, 1730555422495, 1732334719713, 1732342280380, 1737523692098, 1732681916770 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5214/Reviewer_hUbh" ], [ "ICLR.cc/2025/Conference/Submission5214/Reviewer_7c93" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Reviewer_hUbh" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Reviewer_gZaZ" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5214/Reviewer_yYZF" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Reviewer_yYZF" ], [ "ICLR.cc/2025/Conference/Submission5214/Area_Chair_ETK1" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Reviewer_7c93" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "~Michael_S_Ryoo1" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Reviewer_gZaZ" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Submission5214/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5214/Reviewer_7c93" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces BLIP-3-Video, a multimodal vision language model which demonstrates strong performance on video understanding with high token accuracy. BLIP-3-Video uses as few as 16~32 tokens to encode an entire video sequence, which is highly efficient to other video VLMs, enabled by the incorporation of a temporal encoder. 
BLIP-3-Video achieves competitive accuracy on various video question-answering benchmarks while having far fewer parameters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The temporal encoder, which aggregates visual tokens across frames in a highly efficient manner, makes training the model computationally efficient.\", \"Extensive ablation study in the temporal encoder design validates the design choice while also demonstrating the flexibility in the design of the temporal encoder.\", \"Competitive performance on various video question-answering benchmarks, despite its smaller size.\"], \"weaknesses\": [\"The model proposed in the paper utilizes 8 frames per video which are uniformly sampled. This approach might not work for tasks that inherently require more than 8 frames to understand the video. If this method could scale up, an explanation of why that might be would be helpful.\", \"The experiments of the paper focus on video question-answering benchmarks only, and this limited experimentation may not capture the model's ability in other video-based tasks. Further evaluation on other video tasks, such as temporal understanding, would demonstrate the applicability of this approach to more general and diverse video-related tasks.\"], \"questions\": [\"The (video part of the) training of this model is on video captioning data and video question-answering datasets. If the downstream task were to change to a more complex task, like temporal reasoning, would the model require more tokens or would 16~32 still be sufficient? i.e. is there enough visual information encoded in the 16~32 tokens?\", \"In addition, if the downstream task requires remembering multiple details and nuanced events over a long diverse scenario, how would this approach perform? 
Is there a built-in mechanism that prevents information loss during token pooling?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for the detailed and well-reasoned responses, which have addressed some of my concerns. Consider incorporating recent advances in spatiotemporal token compression, such as LongVU, to enhance your architectural design instead of relying solely on the current framework. That said, I do not dismiss the substantial experimental contributions and technical robustness of this work. Overall, I find the insights this paper offers to video understanding researchers somewhat limited. Therefore, I am willing to increase my score from 5 to 6 but see no sufficient justification for a higher score.\\n\\n[1] Shen, Xiaoqian, et al. \\\"LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding.\\\" arXiv preprint arXiv:2410.17434 (2024).\"}", "{\"title\": \"Response to reviewer 7c93 (4/4)\", \"comment\": \"Q4.\\n> \\\"Comparison with Other Efficient Models: How does BLIP-3-Video compare with other recent models that also focus on efficiency, such as those employing knowledge distillation or sparse attention mechanisms? Could the authors provide some insights into the trade-offs involved? Please provide persuasive evidence from experimental studies.\\\"\\n\\nKnowledge distillation and sparse attention mechanisms certainly are interesting research directions to build an efficient, compact model. We believe such directions are orthogonal and complementary to ours. If there is an approach to distill a larger model into a smaller model or enable efficient attention, we believe our model could be extended to incorporate them. In this paper, what we focus on is the role of the temporal encoder and the observation that it is an effective component to abstract tokens over frames. 
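To give a concrete feel for this kind of abstraction, the simplest member of the temporal-encoder family we explore (learnable attentional pooling over all frame tokens) can be sketched as below. This is an illustrative NumPy sketch with toy sizes and randomly initialized queries, not our actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, d, M = 8, 16, 16, 32  # frames, tokens per frame, channels, output tokens (toy sizes)

frame_tokens = rng.standard_normal((T * N, d))  # all visual tokens across frames, flattened
queries = rng.standard_normal((M, d))           # learnable queries (random here for illustration)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Each query attends jointly over space AND time, so an informative frame can
# contribute many tokens while an uninformative frame contributes almost none.
attn = softmax(queries @ frame_tokens.T / np.sqrt(d))  # (M, T*N) attention weights
video_tokens = attn @ frame_tokens                     # (M, d): M tokens for the whole video

assert video_tokens.shape == (M, d)
```

This contrasts with strictly per-frame reduction, which pools each frame's N tokens in isolation before concatenating.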
For more detailed discussion, it will be very helpful to hear from the reviewer which efficient video-based VLM models the reviewer has in mind.\\n\\n\\nQ5.\\n> \\\"Novelty: Compared to LLaMA-VID, where is the core novelty of this paper? Although the experimental results show that 32 tokens achieved better performance on four short video benchmarks, this standard will change with different video lengths, video scenarios, and the complexity of question answering. The scalability and generalizability of this method are questionable. Perhaps a more effective mechanism for accommodating more frames and selecting key information for video question answering from a large number of visual tokens is worth exploring, rather than determining the specific numerical value of a visually overfitted token on a few benchmarks. Similar architectures have been explored enough in a series of works such as Video-LLaVA, LLaVA-VID, LLaVA-NEXT, and so on.\\\"\\n\\nThe key difference between LLaMA-VID and BLIP-3-Video\\u2019s token reduction is the existence of the *temporal* encoder. LLaMA-VID's token reduction is strictly per-frame; it only considers tokens within one frame to reduce them to 2 per frame, making it only spatial. On the other hand, our temporal encoder mechanism jointly looks at the tokens in the video across space and time. This allows BLIP-3-Video to more dynamically select informative tokens across all the frames. In an extreme case, BLIP-3-Video can learn to select 0 tokens from uninformative frames and 10+ tokens from another frame with important details.\\n\\nThis contributes to the better accuracy of BLIP-3-Video compared to LLaMA-VID in all of the datasets we tried. 
The table below summarizes their comparisons.\\n\\n| Dataset | LLaMA-VID | BLIP-3-Video |\\n| --- | --- | --- |\\n| MSVD-QA | 70.0 / 3.7 | 77.1 / 4.2 |\\n| MSRVTT-QA | 58.9 / 3.3 | 60.0 / 3.6 |\\n| ActivityNet-QA | 47.5 / 3.3 | 55.7 / 3.5 |\\n| VideoInstruct | 2.89 | 3.11 |\\n| TempCompass (y/n) | 52.96 | 66.7 |\\n| MVBench | 41.4 | 54.9 |\\n\\nAlso notice that BLIP-3-Video (3.9B) achieves superior accuracy to LLaMA-VID (7B or 13B) while using a smaller LLM. We also tried an ablation against such a per-frame token reduction strategy (similar to LLaMA-VID) in Table 4, and obtained a similar observation. Our space-time token reduction enables much better performance.\\n\\nAnother thing we would like to clarify is that the temporal encoder we are introducing in BLIP-3-Video itself has the capability to abstract tokens over any number of frames. Our *grouped Token Turing Machine* temporal encoder is a 'sequential model'. It in principle is able to sequentially (iteratively) digest tokens from a continuous stream of video frames. The current BLIP-3-Video\\u2019s design of taking a fixed number of frames (8 frames or 16 frames) originates from the limitation in our training hardware, but we believe we are showing the potential of the temporal encoder mechanism that could benefit future model designs. We believe it is much more generic than per-frame token reduction used in prior works like LLaMA-VID. Video-LLaVA and LLaVA-NEXT do not have any temporal encoder or token reduction mechanism over space-time either.\"}", "{\"comment\": \"We are glad to know that all the concerns have been resolved except about the novelty.\\n\\nAs Reviewer hUbh also mentioned, we believe we have meaningful contributions and important observations (impact of token reduction, role of temporal encoder, ...) to share with the research community. We believe this will benefit and motivate future model designs by other researchers. 
We hope the reviewer will consider them when increasing the rating.\"}", "{\"title\": \"Response to reviewer hUbh (1/3)\", \"comment\": \"We thank the reviewer for the comments. Please find our answers below.\\n\\n> 1. \\u201cThe model proposed in the paper utilizes 8 frames per video which are uniformly sampled. This approach might not work for tasks that inherently require more than 8 frames to understand the video. If this method could scale up, an explanation of why that might be would be helpful.\\u201d\\n\\nWe thank the reviewer for raising the concern. The number of frames our model takes is a hyperparameter, and we are able to train a model that takes more frames as an input when necessary. In order to confirm that our model has a capability to digest a larger number of frames and still abstract each video into 32 (or 128) tokens, we trained BLIP-3-Video with 16 frames.\\n\\nThe below table shows the trend.\\n\\n| # frames | # tokens | NExT-QA | ActivityNet-QA |\\n| --- | --- | --- | --- | \\n| 8 frames | 32 tokens | 76.4 | 55.7 / 3.5 |\\n| 8 frames | 128 tokens | 77.1 | 56.7 / 3.6 |\\n| 16 frames | 32 tokens | 76.7 | 55.9 / 3.5 |\\n| 16 frames | 128 tokens | 77.6 | 57.3 / 3.6 |\\n\\nEven while maintaining the number of tokens, we are able to observe that providing more frames in the input allows BLIP-3-Video to scale to better performance. We believe this is due to the fact that increasing the number of frames has an effect of increasing the size of the \\\"pool\\\" of tokens the temporal encoder can select from. We believe this trend (i.e., our model accuracy increasing as the number of frames increases) will continue until it saturates. \\n\\nWe have not observed much sign of information loss with the datasets we tried, including ActivityNet-QA whose average duration is 111 seconds and NExT-QA whose duration is 44 seconds. 
What we confirm in this paper is that our proposed architecture with temporal encoder could be a useful concept/component for properly representing video clips of such durations.\\n\\nAnother thing we would like to mention is that our model is capable of handling any frames provided as an input (i.e., they don\\u2019t need to be uniformly sampled). Frame selection research would be orthogonal and complementary to our work, and it can be easily combined within BLIP-3-Video. The focus of this paper is on confirming the potential of the temporal encoder to capture necessary information in the given frames (and what frames to give is complementary). If a frame selection algorithm is incorporated, we expect even better scaling capability.\"}", "{\"title\": \"Great rebuttal\", \"comment\": \"I thank the authors for their detailed rebuttal. I have carefully read other reviews and the rebuttals. I appreciate their detailed response and additional experiments.\\n\\nTo clarify, by no means was I trying to disregard their contributions/observations in the paper, but I felt the paper would become much stronger if they could provide their insights into some of the points I mentioned (e.g. number of frames, frame selection, complex motions, etc.) \\n\\nI believe that the insights the authors provide will be a valuable contribution to the community, and that BLIP-3 will be a promising direction of future research, given their efficiency in parameter utilization.\\n\\nHence I increase my score and confidence accordingly.\"}", "{\"comment\": \"We plan to include the experimental results and discussions in the revised paper.\\n\\nOur submitted paper also mentions that the code will be open-sourced together with the final version of the paper.\"}", "{\"comment\": \"Thank you for your efforts and detailed rebuttal; all my concerns are comprehensively addressed. 
I have carefully read the author's response as well as the feedback from other reviewers and I will maintain my borderline acceptance score as it is.\\n\\nIt would be great for the authors to investigate a variant with a variable number of visual tokens on demand in a future version, as mentioned in Q1. In my view, attentional pooling or grouped TTM can naturally manage variable video tokens.\"}", "{\"title\": \"Response to reviewer 7c93 (1/4)\", \"comment\": \"We thank the reviewer for the comments. Please find our answer to address the concerns below.\\n\\n1. \\n> \\\"Diversity of Datasets: The experiments primarily rely on a limited set of public benchmarks for evaluation. Expanding the evaluation to include a more diverse range of benchmarks, particularly those with varying lengths and complexities of videos, could provide a more comprehensive assessment of the model's generalizability and robustness.\\\"\\n\\nFollowing the suggestions from the reviewers, we evaluated our model on multiple additional datasets: MVBench, TempCompass, and VideoInstruct.\\n\\nWe find BLIP-3-Video quite competitive in all these benchmarks, particularly considering its size (3.9B) and the number of visual tokens (often less than 1/20 of the others). Notably, we performed quite a bit better than LLaMA-VID-7B, which also uses 32 visual tokens like ours (and has a larger LLM).\\n\\nBLIP-3-Video\\u2019s result on MVBench is as below. 
Notably, it is the 2nd best among the models not taking advantage of the MVBench-provided training dataset (VideoChat2-IT).\\n\\n| Model | # tokens | {VideoChat2-IT training} | MVBench Accuracy |\\n| --- | --- | :---: | --- |\\n| PLLaVA (7B) | 576+ | Y | 46.6 |\\n| VideoLLaMA2 (7B) | 1152 | Y | 54.6 |\\n| ST-LLM (7B) | 256 | ~Y | 54.9 |\\n| PPLLaVA (7B) | 1024 | ~Y | 59.2 |\\n| VideoChat2-Mistral (7B) | 96 | Y | 60.4 |\\n| Kangaroo (8B) | ~10000 | Y | 61.1 |\\n| Tarsier (7B) | 4608+ | ~Y | 62.6 |\\n| | | | |\\n| VideoChatGPT (7B) | 264+ | N | 32.7 |\\n| VideoLLaMA (7B) | 32 | N | 34.1 |\\n| VideoChat (7B) | 32 | N | 35.5 |\\n| LLaMA-VID (7B) | 32 | N | 41.4 |\\n| Video-LLaVA (7B) | 2048 | N | 43.5 |\\n| mPLUG-Owl3 (8B)| n/a | N | 54.5 |\\n| **BLIP-3-Video (3.9B)** | 32 | N | 54.9 |\\n| LLaVA-OneVision (7B) | 3136 | N | 56.7 |\\n\\nOur result compared to SOTA on the TempCompass benchmark is as below.\\n\\n| Model | Yes/No QA | Caption matching |\\n| --- | --- | --- | \\n| GPT-4o | 73.66 | 80.84 |\\n| Qwen2-VL-7B-Instruct | 72.77 | 77.31 |\\n| Gemini-1.5-pro | 70.32 | 77.45 |\\n| LLaVA-OneVision-Qwen-2-7B | 69.67 | 73.79 |\\n| LLaVA-NeXT-Video-32B-Qwen | 69.38 | 76.51 |\\n| InternVL2-8B | 68.24 | 77.11 | \\n| **BLIP-3-Video (3.9B)** | 66.7 | 66.5 |\\n| Llama-3-VILA1.5-8B | 63.64 | 68.93 |\\n| LongVA-7B | 62.13 | 65.67 |\\n| LLaVA-NeXT-Video-7B-DPO | 61.19 | 63.01 |\\n| VideoChat2-vicuna-stage3 | 58.01 | 53.69 |\\n| LLaVA-1.5-13B | 56.38 | 64.27 |\\n| Video-LLaVA-7B | 56.38 | 63.34 |\\n| Video-LLaMA-2-13B | 53.73 | 54.16 |\\n| LLaMA-VID-7B-short-video | 52.96 | 56.02 |\\n\\nVideoInstruct benchmark evaluation also gave us similar results:\\n\\n| Model | VideoInstruct accuracy |\\n| --- | --- |\\n| PLLaVA-34B | 3.32 |\\n| SlowFast-LLaVA-34B | 3.32 |\\n| VideoGPT+ | 3.28 |\\n| ST-LLM-7B | 3.15 |\\n| **BLIP-3-Video (3.9B)** | 3.11 |\\n| VideoChat2_HD_mistral | 3.10 |\\n| LITA-13B | 3.04 |\\n| LLaMA-VID-13B | 2.99 |\\n| VideoChat2 | 2.98 |\\n| LLaMA-VID-7B | 2.89 
|\\n| Video-ChatGPT | 2.38 |\\n\\nWe believe BLIP-3-Video performs very reasonably on all these benchmarks, considering its smaller size and its use of much fewer visual tokens. \\n\\nAlso, we would like to highlight that we evaluated our model with a diverse set of datasets with different video durations. As shown in the table below, they range from very short video clips to longer clips.\\n\\n| Dataset | # of videos | Average duration (sec.) |\\n| --- | --- | --- |\\n| TGIF-QA | 165,165 | 3 |\\n| MSVD-QA | 13,157 | 10 |\\n| MSRVTT-QA | 72,821 | 15 |\\n| MVBench | 4,000 | 16 |\\n| NExT-QA | 52,044 | 44 |\\n| ActivityNet-QA | 800 | 111 |\"}", "{\"summary\": \"This paper presents BLIP-3-Video, which introduces a \\\"temporal encoder\\\" alongside a conventional visual tokenizer, allowing it to significantly reduce visual tokens (32 tokens compared to thousands in other models). The study explored various temporal encoders, including learnable spatio-temporal pooling and sequential models like token turning machines (TTM). Detailed experiments showed that different encoder types had a noticeable impact on performance, particularly in handling complex video scenarios. Experimental results show that BLIP-3-Video achieved video question-answering accuracies comparable to much larger state-of-the-art models while being smaller and more efficient.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. An impressive tradeoff between efficiency and accuracy on the MSVD-QA benchmark.\\n2. Extensive explorations on temporal encoders to reduce visual tokens.\\n3. The paper is well written and easy to follow.\", \"weaknesses\": \"1. Novelty: The primary weakness is the insufficient novelty. As detailed in Section 2.2, the only improvements to TTM include (1) time-stamped positional encodings and (2) a 'grouped' TTM temporal encoder. These minor changes do not substantiate a significant contribution.\\n\\n2. 
Evaluation Benchmarks: The evaluated benchmarks are unconvincing for assessing Video LMMs. The model was only evaluated on MSVD-QA, MSRVTT-QA, ActivityNet-QA, TGIF-QA, and NExT-QA, which are not so ideal for testing LMMs. The authors may consider newer benchmarks like VideoMME and MVBench, which are proposed for assessing Video LMMs.\", \"questions\": \"1. What novel designs does this method introduce compared to TTM? Are there ablation studies for these designs?\\n\\n2. The model utilizes the VideoChatGPT instruction set. Why hasn't it been evaluated on that benchmark?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer yYZF (1/3)\", \"comment\": \"We thank the reviewer for the comments. Please find our answers to the reviewer's comments below.\\n\\n> 1. \\\"Novelty: The primary weakness is the insufficient novelty. As detailed in Section 2.2, the only improvements to TTM include (1) time-stamped positional encodings and (2) a 'grouped' TTM temporal encoder. These minor changes do not substantiate a significant contribution.\\\"\\n\\nAlthough TTM was introduced in the previous work, we find our extensions very crucial and impactful, particularly for video-based VLMs. 
We observe that the original TTM (as it is) does not perform competitively within the VLM, and that our extension improves this significantly. Please find the ablation result table comparing different extensions of the TTM below:\\n\\n| Temporal encoder | MSVD-QA | TGIF-QA | ActivityNet-QA | NExT-QA |\\n|-----------------------------------------------|-----------|-----------|---------|----------|\\n| Original TTM | 76.42 / 4.15 | 75.80 / 4.26 | 54.45 / 3.48 | 75.42 |\\n| TTM + time-stamp | 76.43 / 4.16 | 76.44 / 4.29 | 56.15 / 3.53 | 75.96 |\\n| TTM + grouping | 76.99 / 4.17 | 77.05 / 4.30 | 55.92 / 3.54 | 76.46 |\\n| Ours (time-stamp + grouping) | 77.29 / 4.18 | 77.10 / 4.31 | 56.66 / 3.56 | 77.07 |\\n\\nWhat we introduce in this paper is the extended TTM that actually works well within a multimodal LLM, unlike previous work.\\n\\nIn addition, we believe our paper is the first to successfully extend TTM for this significant token reduction within a video-based VLM. TTM was originally used with a much smaller Transformer (ViT), and its token reduction was only down to 16 per frame/timestep. Mirasol3B tried TTM within a VLM, but even in Mirasol3B, the tokens were reduced to 32 per timestep. In our case, the tokens are reduced to 2-4 per timestep (i.e., 16-32 tokens total).\\n\\nFinally, we would like to emphasize once more that most of the video-based VLMs omit the usage of temporal encoders entirely. Compared to these prior works without any temporal encoder (e.g., Tarsier, LLaVA-OneVision, \\u2026), we believe we are providing an efficient and effective architecture to summarize tokens over time. We are exploring the use of temporal encoders, a sequential model in particular, at this scale almost for the first time.\\n\\nWe also emphasize that BLIP-3-Video is one of the first compact video models (3.9B) that obtains performance competitive with much larger SOTA models. 
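As an aside, the time-stamp extension can be sketched in a few lines of pure Python (the sinusoidal formula, dimensions, and function names here are illustrative, not the paper's exact implementation): every token of frame t gets the same per-frame time encoding added before entering the temporal encoder, so identical content at different times becomes distinguishable.

```python
import math

def timestamp_encoding(t, d):
    # Sinusoidal encoding of the frame index t (formula is illustrative).
    return [math.sin(t / 10000 ** (i / d)) if i % 2 == 0
            else math.cos(t / 10000 ** ((i - 1) / d))
            for i in range(d)]

def add_timestamps(frame_tokens):
    # frame_tokens: one list of d-dim tokens per frame. Every token of
    # frame t receives the same time encoding, so two visually identical
    # frames at different times map to different token values.
    out = []
    for t, tokens in enumerate(frame_tokens):
        pe = timestamp_encoding(t, len(tokens[0]))
        out.append([[x + p for x, p in zip(tok, pe)] for tok in tokens])
    return out

# 3 identical dummy frames, 4 zero-tokens of dimension 8 each:
frames = [[[0.0] * 8 for _ in range(4)] for _ in range(3)]
stamped = add_timestamps(frames)
print(stamped[0][0] != stamped[1][0])  # → True: same content, different time
```

The only point of the toy example is that the encoder downstream can now tell repeated content apart by time, which is what the ablation row "TTM + time-stamp" measures.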
We believe the findings of this paper (e.g., token reduction with temporal encoders) can further benefit future model designs in the research community.\"}", "{\"comment\": \"Thanks for the rebuttal. The performance on the VideoMME and VideoChatGPT benchmarks looks good. I will increase my rating. However, like reviewer 7c93, my concern about the novelty is not fully resolved. It is hard to identify \\\"+grouping, +timestamp\\\" as novelties; they seem incremental.\"}", "{\"metareview\": \"This paper provides BLIP-3-Video, a Video LLM that builds on the previous BLIP-3 architecture but focuses on incorporating temporality into the architecture by learning spatio-temporal pooling to obtain video representations in only 32 tokens. The paper is clearly written, provides a thorough analysis of various pooling strategies, and tackles a topic of high interest to the community.\\nThe paper's key weakness is the limited novelty; it is rather an exploration of architectures. Especially compared to TTM, the novelty, as also stated by the authors, is only the addition of grouping and time-stamp. For this reason, the AC recommends the rejection of the paper as its key contributions don't warrant a full paper at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"The comment to the AC has been considered. Overall, reviewers raised points about novelty (yYZF, 7c93), scalability questions (7c93), flexibility in output tokens after training (gZaZ), and further evaluations on other tasks, including ones that require longer memory (hUbh). These points were discussed in a thorough set of discussions with the authors, with mainly the points about novelty remaining. It is a difficult borderline paper, and the key to the decision by the AC has been the overall lack of core technical contribution that would warrant a full ICLR paper.\"}", "{\"comment\": \"Thank you for bringing this very recent paper (LongVU) to our attention. 
We find that it was released on arXiv after we submitted our paper to ICLR.\\n\\nThe frame selection used in LongVU certainly seems interesting and complementary to ours. The main difference between our approach and LongVU for the token compression is the temporal token abstraction/merging over time. LongVU introduces a method to compress tokens in every frame spatially (i.e., within the same frame) while conditioning it on the first frame. On the other hand, our temporal encoder specializes in combining tokens over time (in all the frames), using a sequential model (the grouped TTM).\\n\\nWe believe these research works are complementary, focusing on different aspects of the problem. They will both benefit future research.\"}", "{\"summary\": \"The paper introduces BLIP-3-Video, a novel multimodal language model designed for video understanding that efficiently captures temporal information across frames. A key innovation is the integration of a 'temporal encoder' that maps a sequence of tokens from multiple frames into a compact set of visual tokens, allowing BLIP-3-Video to operate with significantly fewer visual tokens compared to its competitors. The model explores various temporal encoders, including learnable spatio-temporal pooling and sequential models like Token Turing Machines. Experiments demonstrate that BLIP-3-Video achieves comparable video question-answering accuracies to much larger models, while being more efficient and smaller in size due to its reduced token usage. 
The paper also details the model's architecture, training recipe, and experimental results, highlighting the effectiveness of the temporal encoder in representing videos for question-answering tasks with a small number of tokens.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"**Originality**: It introduces an innovative temporal encoder that significantly reduces the number of visual tokens needed to represent videos, offering a new approach to efficiency in video understanding models.\", \"**Quality**: The model is thoroughly evaluated against state-of-the-art benchmarks, demonstrating competitive performance. The ablation studies are providing insightful analyses into the model's components.\", \"**Clarity**: The paper is well-organized, with clear explanations and visual aids that effectively convey complex information, making the technical content accessible to readers.\", \"**Significance**: BLIP-3-Video's efficiency in handling video data with fewer resources.\"], \"weaknesses\": [\"**Diversity of Datasets**: The experiments primarily rely on a limited set of public benchmarks for evaluation. Expanding the evaluation to include a more diverse range of benchmarks, particularly those with varying lengths and complexities of videos, could provide a more comprehensive assessment of the model's generalizability and robustness.\", \"**Scalability Analysis**: While the paper demonstrates the model's efficiency, there is a lack of analysis on how the model scales with increasing video length and complexity. 
Future work could benefit from exploring the model's performance as it processes longer videos, which is crucial for real-world applications.\", \"**Comparison with State-of-the-Art**: Although comparisons are made with other models, the paper could benefit from a more detailed analysis comparing the trade-offs between BLIP-3-Video and the state-of-the-art models in terms of accuracy, computational resources, and inference time.\", \"**Implementation Details**: Some aspects of the model's implementation, such as the specific choices made in the architecture of the temporal encoder, could be elaborated upon with more technical depth. This additional detail would aid other researchers in understanding the design decisions and potentially replicating or improving upon them.\"], \"questions\": \"1. **Temporal Encoder Generalization**: A smaller number of visual tokens is particularly important for understanding longer videos. However, this paper only tested on a few simple short video benchmarks. Please provide test results on video benchmarks of different lengths and scenarios, such as VideoMME, MVBench, etc.\\n\\n2. **Scalability Concerns**: How does the model's performance and efficiency scale with longer videos, and have you observed any limitations in terms of the number of frames the model can effectively process?\\n\\n3. **Model Interpretability**: The paper mentions the use of different types of temporal encoders. Are there any plans to provide insights into how these encoders make decisions?\\n\\n4. **Comparison with Other Efficient Models**: How does BLIP-3-Video compare with other recent models that also focus on efficiency, such as those employing knowledge distillation or sparse attention mechanisms? Could the authors provide some insights into the trade-offs involved? Please provide persuasive evidence from experimental studies.\\n\\n5. **Novelty**: Compared to LLaMA-VID, where is the core novelty of this paper? 
Although the experimental results show that 32 tokens achieved better performance on four short video benchmarks, this standard will change with different video lengths, video scenarios, and the complexity of question answering. The scalability and generalizability of this method are questionable. Perhaps a more effective mechanism for accommodating more frames and selecting key information for video question answering from a large number of visual tokens is worth exploring, rather than determining the specific numerical value of a visually overfitted token on a few benchmarks. Similar architectures have been explored enough in a series of works such as Video-LLaVA, LLaVA-VID, LLaVA-NEXT, and so on.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the feedback.\\n\\nWe are very glad to know that all of the concerns have been comprehensively addressed. The suggestion for a new model with dynamic token number is a great idea worth exploring in the future, and we will certainly include its discussion in the final version of the paper.\\n\\nPlease let us know if there are any further things we can provide or clarify. We are a bit lost as the score remains borderline acceptance even after addressing all the concerns.\"}", "{\"title\": \"Response to reviewer 7c93 (3/4)\", \"comment\": \"Q1.\\n> \\\"Temporal Encoder Generalization: A smaller number of visual tokens is particularly important for understanding longer videos. However, this paper only tested on a few simple short video benchmarks. Please provide test results on video benchmarks of different lengths and scenarios, such as VideoMME, MVBench, etc.\\\"\\n\\nFollowing the suggestion from the reviewer, we evaluated BLIP-3-Video on MVBench. Please find the table in the above post (Answer 1). As discussed above, we find BLIP-3-Video quite competitive in this benchmark. 
While having a much smaller model size of 3.9B (i.e., using fewer computational resources) and utilizing far fewer visual tokens (i.e., faster inference), it performed comparably to the SOTA models. Among the models not trained with the MVBench-provided training set, it ranked 2nd, behind only LLaVA-OneVision, which uses ~100x more visual tokens. Compared to other models using 32 tokens like LLaMA-VID-7B, it performed significantly better despite using a smaller LLM (3.9B vs. 7B).\\n\\n\\nQ2.\\n> \\\"Scalability Concerns: How does the model's performance and efficiency scale with longer videos, and have you observed any limitations in terms of the number of frames the model can effectively process?\\\"\\n\\nThanks for the suggestion. Since the number of frames our model takes is a hyperparameter, we are able to train a model that takes more frames as input. In order to confirm that our model has the capability to digest a larger number of frames and still abstract each video into 32 (or 128) tokens, we trained BLIP-3-Video with 16 frames.\\n\\nThe below table shows the trend.\\n\\n| # frames | # tokens | NExT-QA | ActivityNet-QA |\\n| --- | --- | --- | --- | \\n| 8 frames | 32 tokens | 76.4 | 55.7 / 3.5 |\\n| 8 frames | 128 tokens | 77.1 | 56.7 / 3.6 |\\n| 16 frames | 32 tokens | 76.7 | 55.9 / 3.5 |\\n| 16 frames | 128 tokens | 77.6 | 57.3 / 3.6 |\\n\\nEven while maintaining the number of tokens, we are able to observe that providing more frames in the input allows BLIP-3-Video to scale to better performance. We believe this is due to the fact that increasing the number of frames has the effect of increasing the size of the \\\"pool\\\" of tokens the temporal encoder can select from. We believe this trend (i.e., our model accuracy increasing as the number of frames increases) will continue until it saturates. 
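To illustrate the fixed-token-budget behavior above with a toy, framework-free sketch (all names, dimensions, and the exact pooling form are illustrative, not the trained model): a fixed set of 32 learned query vectors attends over however many frame tokens are provided, so the output size stays constant while the "pool" of input tokens grows with the frame count.

```python
import math, random

def attention_pool(tokens, queries):
    # Map an arbitrary number of input tokens to exactly len(queries)
    # outputs: one softmax-weighted sum of tokens per query vector.
    pooled = []
    for q in queries:
        scores = [sum(qi * ti for qi, ti in zip(q, t)) for t in tokens]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [x / z for x in w]
        pooled.append([sum(wi * t[k] for wi, t in zip(w, tokens))
                       for k in range(len(q))])
    return pooled

random.seed(0)
d = 8                                     # toy token dimension
queries = [[random.gauss(0, 1) for _ in range(d)] for _ in range(32)]

for n_frames in (8, 16):                  # 4 toy tokens per frame
    tokens = [[random.gauss(0, 1) for _ in range(d)]
              for _ in range(n_frames * 4)]
    out = attention_pool(tokens, queries)
    print(len(out))  # → 32 for both frame counts
```

More frames only enlarge the candidate pool the 32 queries select from; the cost of everything downstream of the pooling stays unchanged.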
\\n\\nVery long videos (e.g., hours) are beyond the scope of this paper and would require additional (hierarchical) LLM mechanisms similar to LLovi, LangRepo, or VideoTree. Such frameworks are complementary to ours, and we focus on best capturing temporal information over shorter (~2 min) video segments, which could be combined within such frameworks as also discussed a bit in the above post (Answer 2).\\n\\n\\nQ3.\\n> \\\"Model Interpretability: The paper mentions the use of different types of temporal encoders. Are there any plans to provide insights into how these encoders make decisions?\\\"\\n\\nThanks for the suggestion. Our plan is to release the open-source code with the final version of the paper. We will include a visualization module for the attention layers in the code, so that users can check which visual tokens are being selected by which temporal encoder.\"}", "{\"comment\": \"We thank the reviewer very much for the constructive feedback and understanding the contributions of this work.\"}", "{\"title\": \"Response to reviewer yYZF (2/3)\", \"comment\": \"> 2. \\u201cEvaluation Benchmarks: The evaluated benchmarks are unconvincing for assessing Video LMMs. The model was only evaluated on MSVD-QA, MSRVTT-QA, ActivityNet-QA, TGIF-QA, and NExT-QA, which are not so ideal for testing LMMs. The authors may consider newer benchmarks like VideoMME and MVBench, which are proposed for assessing Video LMMs.\\u201d\\n\\nFollowing the suggestion from the reviewer, we evaluated our model on MVBench. The below table compares BLIP-3-Video with other state-of-the-art (SOTA) models on MVBench. 
\\n\\n| Model | # tokens | {VideoChat2-IT training} | Accuracy |\\n| --- | --- | :---: | --- |\\n| PLLaVA (7B) | 576+ | Y | 46.6 |\\n| VideoLLaMA2 (7B) | 1152 | Y | 54.6 |\\n| ST-LLM (7B) | 256 | ~Y | 54.9 |\\n| PPLLaVA (7B) | 1024 | ~Y | 59.2 |\\n| VideoChat2-Mistral (7B) | 96 | Y | 60.4 |\\n| Kangaroo (8B) | ~10000 | Y | 61.1 |\\n| Tarsier (7B) | 4608+ | ~Y | 62.6 |\\n| | | | |\\n| VideoChatGPT (7B) | 264+ | N | 32.7 |\\n| VideoLLaMA (7B) | 32 | N | 34.1 |\\n| VideoChat (7B) | 32 | N | 35.5 |\\n| LLaMA-VID (7B) | 32 | N | 41.4 |\\n| Video-LLaVA (7B) | 2048 | N | 43.5 |\\n| mPLUG-Owl3 (8B)| n/a | N | 54.5 |\\n| **BLIP-3-Video (3.9B)** | 32 | N | 54.9 |\\n| LLaVA-OneVision (7B) | 3136 | N | 56.7 |\\n\\n\\nAmong the models which did not directly use the MVBench-provided training dataset (i.e., VideoChat2-IT), BLIP-3-Video performs 2nd best, behind only LLaVA-OneVision, which is bigger and uses almost ~100x more visual tokens. BLIP-3-Video achieves decent results given its smaller model size (3.9B vs. 7B+) and fewer visual tokens (32 vs. 3000+). \\n\\nWe also hypothesize that training BLIP-3-Video with VideoChat2-IT, which is the instruction tuning data introduced by the MVBench paper, would further enhance its performance, since VideoChat2-IT contains datasets like CLEVRER which also exists in MVBench; many SOTA models achieving high results on MVBench did train their models using VideoChat2-IT. In the table, \\u2018Y\\u2019 denotes that the model was trained on VideoChat2-IT, whereas \\u2018N\\u2019 denotes that the model was not trained on VideoChat2-IT. 
\\u2018~Y\\u2019 means the model\\u2019s paper does not explicitly say they used VideoChat2-IT, but their training recipe shows the majority of VideoChat2-IT was actually used (e.g., CLEVRER, Kinetics-710 & SthSthV2, WebVid, EgoQA, YouCook2, etc.).\"}", "{\"title\": \"Response to reviewer 7c93 (2/4)\", \"comment\": \"2.\\n> \\\"Scalability Analysis: While the paper demonstrates the model's efficiency, there is a lack of analysis on how the model scales with increasing video length and complexity. Future work could benefit from exploring the model's performance as it processes longer videos, which is crucial for real-world applications.\\\"\\n\\nThe focus of this paper has been on capturing temporal information over shorter (~2 min) video segments. What we show in the paper is that our model with the temporal encoder mechanism enables efficient capturing of information in video segments (with temporal changes) compared to prior work. What we confirm is that it is a better/more efficient way to handle a video clip.\\n\\nHandling longer videos is an important research area, and we can offer the following insights. In order to handle long videos (e.g., movies), additional (hierarchical) LLM mechanisms similar to LLovi, LangRepo, or VideoTree would be very useful. Such framework design is orthogonal to our work; our model can serve as the VLM component within those types of frameworks, enabling extraction of better video information from local/short segments that higher-level LLM will combine for the long video modeling.\\n\\n3. \\n> \\\"Comparison with State-of-the-Art: Although comparisons are made with other models, the paper could benefit from a more detailed analysis comparing the trade-offs between BLIP-3-Video and the state-of-the-art models in terms of accuracy, computational resources, and inference time.\\\"\\n\\nThank you for the suggestion. In our paper, we have provided figures and tables to show them. 
We show (1) the trade-off between model accuracy and the number of visual tokens, which is a good proxy for inference time (i.e., more tokens directly suggest heavier compute). Figure 1 (left) as well as Tables 1 and 2 include such information. The actual inference time depends on the hardware being used, and we find the number of visual tokens to be a fair proxy for the comparison.\\n\\nIn addition, we included (2) the trade-off between model accuracy and model size, which serves as a good proxy for the computational resources. The larger the model is, the more computational resources are required. Figure 1 (right) as well as the model sizes we specified in Tables 1 and 2 correspond to this information.\"}", "{\"title\": \"Response to reviewer gZaZ (2/2)\", \"comment\": \"> 3. \\\"Relating with the weakness 1, extra modules introduced besides the visual encoder (SigLIP) and LLM sound too complicated. If I understand correctly, there are a perceiver-resampler and a temporal encoder (attention pooling or TTM). My idea is naive and simple, can we just finetune a perceiver-resampler in BLIP-3 into a temporal encoder, rather than just compressing tokens per frame? Given the strong performance of cross attention layers in the perceiver resampler, this seems to be a missing but promising ablation study in this paper.\\\"\\n\\nWe agree that Perceiver-Resampler adds one more layer of complication. Its existence is solely because it is part of the pre-trained BLIP-3 model we build on top of, and we are directly inheriting it. The concept of temporal encoder we demonstrate in the paper would be independent of the existence of the Perceiver-Resampler.\\n\\nFollowing the suggestion from the reviewer, we ran an additional ablation. We implemented the model only using/finetuning Perceiver-Resampler in BLIP-3, making it also serve as the temporal encoder. 
In this version, the Perceiver-Resampler compresses tokens across all the frames, and there is no separate spatio-temporal pooling or TTM.\\n\\nThe table below shows the results. 128 tokens were used in this table for a fair comparison, as the Perceiver-Resampler was pre-trained and fine-tuned to extract 128 tokens. We observe that fine-tuning the Perceiver-Resampler as a temporal encoder provides much worse performance compared to our temporal encoders.\\n\\n| Temporal encoder | TGIF-QA | ActivityNet-QA | NExT-QA |\\n| ------------------------------ | ----------- | ----------- | --------- | \\n| Perceiver-Resampler | 72.46 / 4.13 | 52.61 / 3.38 | 76.44 |\\n| Ours (Attentional Pooling) | 76.90 / 4.29 | 56.94 / 3.56 | 76.27 |\\n| Ours (Grouped TTM) | 77.10 / 4.31 | 56.66 / 3.56 | 77.07 |\\n\\n\\n> Q1. \\\"Can one BLIP-3-Video model produce 32 and 128 tokens on demand as a hyper parameter? Or they are different models trained individually?\\\"\\n\\nWe do need to train a different model separately if the number of target visual tokens changes. More dynamic selection with a single model will be interesting future work. We will clarify this in the paper.\\n\\n> Q2. \\\"What does text encoder in Fig 2 mean? The text tokenizer?\\\"\\n\\nYes, it\\u2019s the text tokenizer. Apologies for the confusion, and we will revise it.\"}", "{\"title\": \"Post-decision comment\", \"comment\": \"This paper, which got 8(accept)-6(borderline accept)-6(borderline accept)-5(borderline reject) scores, got rejected.\\n\\nAlthough we respect the decision of the AC, we feel a bit sad that the paper is rejected despite having strong technical contributions to the field (supported by the reviewers), because of one remaining under-explained opinion of just a single reviewer.\\n\\nIt is particularly so given that all 4 reviewers agreed that the paper has meaningful technical observations by introducing a small video-VLM model with an extremely small number of visual tokens. 
Further, 3 out of 4 reviewers agreed that the paper presents important technical contributions to the research area, introducing the extensive use of temporal encoders for visual tokens in the video. Notice that, as mentioned in our interactions with the reviewers and acknowledged by them, most of the existing video-based VLMs entirely lack the use of temporal encoders on top of image-level visual encoders, let alone extending advanced sequential models like Token Turing Machines as we do in this paper.\\n\\nWe emphasize once more that all the concerns raised by the reviewers (mentioned in the AC comment) were addressed during our rebuttal, except for the difference in opinion with yYZF, which was given to us without much justification or details.\"}", "{\"title\": \"Response to reviewer gZaZ (1/2)\", \"comment\": \"We thank the reviewer for the comments. Please find our answers below.\\n\\n> 1. \\\"The presentation of the main method (Sec 2.2) somwhat presents confusion: Does BLIP-3-Video both use spatio-temporal attentional pooling and TTM? Is there a perceive resampler before temporal encoder in BLIP-3-Video (cannot be infered from Figure 2)?\\\"\\n\\nWe apologize for the confusion. We have two different versions of the model architecture: one that uses spatio-temporal attentional pooling, and the other that uses TTM as the temporal encoder. \\n\\nFor the TTM version, we clarify that the TTM itself has the final \\u201coutput\\u201d operation layer in it (by its design), which we implement using a TokenLearner (spatio-temporal attentional pooling). As a result, the TTM encoder naturally contains spatio-temporal attentional pooling within it. We will improve the figure to clarify this better, and thank you for the suggestion.\\n\\nIn both versions, yes, the Perceiver-Resampler exists before the temporal encoder. We are inheriting it from the BLIP-3 model. 
The Perceiver-Resampler generates 128 tokens per frame, and our temporal encoder is applied on top of it to map such 128 * T tokens into 32 (or 128) tokens total. We will revise Figure 2 to clarify this further.\\n\\n> 2. \\\"Compressing a video into 32 tokens is a compelling and exciting idea. However, I am worried that spatial-temporal details will be missing through compression, which is crucial for some detailed reasoning in LLMs. More evaluation of BLIP-3-Video on diverse tasks beyond captioning and MCQ are encouraged. (also, as the compression is not text query guided, the compression is solely dominated by the visual information itself. That is to say, 32 tokens per a video are fixed under different text query, which might not be appropriate in general)\\\"\\n\\nYes, we do understand the concern and we thank the reviewer for asking this.\\n\\nThe focus of this paper is on the learning of compact *visual* representations prior to their interaction with text. This has pros and cons. One advantage is that it allows answering multiple questions without having to re-compute the visual tokens.\\n\\nFollowing the suggestion regarding the diverse tasks, we further evaluated BLIP-3-Video on the TempCompass benchmark, in order to test it on a wider variety of tasks beyond video captioning and MCQ. TempCompass is particularly useful as it has two different types of evaluations in addition to MCQ and captioning: \\\"yes/no QA\\\" and \\\"caption matching\\\". We tested BLIP-3-Video on these two tasks. Also notice that TempCompass has some explicit temporal reasoning questions, such as the \\u201cevent order\\u201d and \\u201cspeed\\u201d questions in the dataset.\\n\\nWe find BLIP-3-Video quite competitive in this benchmark, particularly considering its size (3.9B) and the number of visual tokens (often less than 1/20 of the others). 
Notably, we performed quite a bit better than LLaMA-VID-7B, which also uses 32 visual tokens like ours (and has a larger LLM).\\n\\n| Model | Yes/No QA | Caption matching |\\n| --- | --- | --- | \\n| GPT-4o | 73.66 | 80.84 |\\n| Qwen2-VL-7B-Instruct | 72.77 | 77.31 |\\n| Gemini-1.5-pro | 70.32 | 77.45 |\\n| LLaVA-OneVision-Qwen-2-7B | 69.67 | 73.79 |\\n| LLaVA-NeXT-Video-32B-Qwen | 69.38 | 76.51 |\\n| InternVL2-8B | 68.24 | 77.11 | \\n| **BLIP-3-Video (3.9B)** | 66.7 | 66.5 |\\n| Llama-3-VILA1.5-8B | 63.64 | 68.93 |\\n| LongVA-7B | 62.13 | 65.67 |\\n| LLaVA-NeXT-Video-7B-DPO | 61.19 | 63.01 |\\n| VideoChat2-vicuna-stage3 | 58.01 | 53.69 |\\n| LLaVA-1.5-13B | 56.38 | 64.27 |\\n| Video-LLaVA-7B | 56.38 | 63.34 |\\n| Video-LLaMA-2-13B | 53.73 | 54.16 |\\n| LLaMA-VID-7B-short-video | 52.96 | 56.02 |\\n\\nWe also tested BLIP-3-Video on another benchmark, MVBench, and got a similar observation.\\n\\n| Model | # tokens | {VideoChat2-IT training} | Accuracy |\\n| --- | --- | :---: | --- |\\n| PLLaVA (7B) | 576+ | Y | 46.6 |\\n| VideoLLaMA2 (7B) | 1152 | Y | 54.6 |\\n| ST-LLM (7B) | 256 | ~Y | 54.9 |\\n| PPLLaVA (7B) | 1024 | ~Y | 59.2 |\\n| VideoChat2-Mistral (7B) | 96 | Y | 60.4 |\\n| Kangaroo (8B) | ~10000 | Y | 61.1 |\\n| Tarsier (7B) | 4608+ | ~Y | 62.6 |\\n| | | | |\\n| VideoChatGPT (7B) | 264+ | N | 32.7 |\\n| VideoLLaMA (7B) | 32 | N | 34.1 |\\n| VideoChat (7B) | 32 | N | 35.5 |\\n| LLaMA-VID (7B) | 32 | N | 41.4 |\\n| Video-LLaVA (7B) | 2048 | N | 43.5 |\\n| mPLUG-Owl3 (8B)| n/a | N | 54.5 |\\n| **BLIP-3-Video (3.9B)** | 32 | N | 54.9 |\\n| LLaVA-OneVision (7B) | 3136 | N | 56.7 |\\n\\nAlso note that NExT-QA, which we already included in the submitted paper, has some explicit temporal ordering questions.\"}", "{\"title\": \"Response to reviewer hUbh (2/3)\", \"comment\": \"> 2. \\\"The experiments of the paper focus on video question-answering benchmarks only, and this limited experimentation may not capture the model's ability in other video-based tasks. 
Further evaluation on other video tasks, such as temporal understanding would demonstrate the applicability of this approach to more general and diverse video-related tasks.\\\"\\n\\nWe thank the reviewer for the suggestion. We also want to mention that we evaluated the model on a video caption task (in addition to VQA), and reported the results in Section 3.4.\\n\\nIn addition, we newly evaluated BLIP-3-Video on TempCompass in order to test it on a wider variety of tasks beyond video captioning and MCQ. TempCompass is particularly useful as it has two different types of evaluations in addition to MCQ and captioning: \\\"yes/no QA\\\" and \\\"caption matching\\\". We tested BLIP-3-Video on these two tasks. Also notice that TempCompass has some explicit temporal reasoning questions, such as the \\u201cevent order\\u201d and \\u201cspeed\\u201d questions in the dataset.\\n\\nWe find BLIP-3-Video quite competitive in this benchmark, particularly considering its size (3.9B) and the number of visual tokens (often less than 1/20 of the others). 
Notably, we performed quite a bit better than LLaMA-VID-7B, which also uses 32 visual tokens like ours (and has a bigger LLM).\\n\\n| Model | Yes/No QA | Caption matching |\\n| --- | --- | --- | \\n| GPT-4o | 73.66 | 80.84 |\\n| Qwen2-VL-7B-Instruct | 72.77 | 77.31 |\\n| Gemini-1.5-pro | 70.32 | 77.45 |\\n| LLaVA-OneVision-Qwen-2-7B | 69.67 | 73.79 |\\n| LLaVA-NeXT-Video-32B-Qwen | 69.38 | 76.51 |\\n| InternVL2-8B | 68.24 | 77.11 | \\n| **BLIP-3-Video (3.9B)** | 66.7 | 66.5 |\\n| Llama-3-VILA1.5-8B | 63.64 | 68.93 |\\n| LongVA-7B | 62.13 | 65.67 |\\n| LLaVA-NeXT-Video-7B-DPO | 61.19 | 63.01 |\\n| VideoChat2-vicuna-stage3 | 58.01 | 53.69 |\\n| LLaVA-1.5-13B | 56.38 | 64.27 |\\n| Video-LLaVA-7B | 56.38 | 63.34 |\\n| Video-LLaMA-2-13B | 53.73 | 54.16 |\\n| LLaMA-VID-7B-short-video | 52.96 | 56.02 |\\n\\n\\nWe also tested BLIP-3-Video on another benchmark, MVBench, and got a similar observation.\\n\\n| Model | # tokens | {VideoChat2-IT training} | Accuracy |\\n| --- | --- | :---: | --- |\\n| PLLaVA (7B) | 576+ | Y | 46.6 |\\n| VideoLLaMA2 (7B) | 1152 | Y | 54.6 |\\n| ST-LLM (7B) | 256 | ~Y | 54.9 |\\n| PPLLaVA (7B) | 1024 | ~Y | 59.2 |\\n| VideoChat2-Mistral (7B) | 96 | Y | 60.4 |\\n| Kangaroo (8B) | ~10000 | Y | 61.1 |\\n| Tarsier (7B) | 4608+ | ~Y | 62.6 |\\n| | | | |\\n| VideoChatGPT (7B) | 264+ | N | 32.7 |\\n| VideoLLaMA (7B) | 32 | N | 34.1 |\\n| VideoChat (7B) | 32 | N | 35.5 |\\n| LLaMA-VID (7B) | 32 | N | 41.4 |\\n| Video-LLaVA (7B) | 2048 | N | 43.5 |\\n| mPLUG-Owl3 (8B)| n/a | N | 54.5 |\\n| **BLIP-3-Video (3.9B)** | 32 | N | 54.9 |\\n| LLaVA-OneVision (7B) | 3136 | N | 56.7 |\"}", "{\"summary\": \"This paper presents an efficient Video LLM, coined as BLIP-3-Video, by incorporating extra modules to transform dense video tokens into sparse tokens (e.g., # can be 32). Specifically, the authors use a sequential transformer (a so-called Token Turing Machine) on top of frame-wise image tokens and a perceiver-resampler to produce a limited set of 32 tokens. 
Diverse video QA benchmarks show the competitive performance of BLIP-3-Video on video question answering tasks. The authors also test the captioning ability of BLIP-3-Video.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper investigates an important topic: how to efficiently & effectively understand videos with LLMs, which is underexplored so far. This paper proposes BLIP-3-Video to understand videos with just 32 tokens (in LLMs) based on Phi-3, and it outperforms both parameter-heavy and visual-token-heavy models (as shown in Fig 1) on QA and captioning benchmarks. Additionally, this paper presents a compelling finding (somewhat): a video can be effectively represented by just 32 tokens in LLMs for QA and captioning tasks. I think this research line is promising and can benefit several downstream tasks, e.g., captioning for text-to-video generation.\", \"weaknesses\": \"Although this is overall a good paper, several concerns remain:\\n1. The presentation of the main method (Sec 2.2) is somewhat confusing: Does BLIP-3-Video use both spatio-temporal attentional pooling and TTM? Is there a perceiver-resampler before the temporal encoder in BLIP-3-Video (this cannot be inferred from Figure 2)?\\n\\n2. Compressing a video into 32 tokens is a compelling and exciting idea. However, I am worried that spatial-temporal details will be missing through compression, which is crucial for some detailed reasoning in LLMs. More evaluation of BLIP-3-Video on diverse tasks beyond captioning and MCQ is encouraged.\\n(also, as the compression is not text-query guided, the compression is solely dominated by the visual information itself. That is to say, the 32 tokens per video are fixed under different text queries, which might not be appropriate in general)\\n\\n3. Relating to weakness 1, the extra modules introduced besides the visual encoder (SigLIP) and the LLM sound too complicated. 
If I understand correctly, there are a perceiver-resampler and a temporal encoder (attention pooling or TTM). My idea is naive and simple: can we just finetune a perceiver-resampler in BLIP-3 into a temporal encoder, rather than just compressing tokens per frame? Given the strong performance of cross-attention layers in the perceiver-resampler, this seems to be a missing but promising ablation study in this paper.\", \"questions\": \"1. Can one BLIP-3-Video model produce 32 and 128 tokens on demand as a hyperparameter? Or are they different models trained individually?\\n\\n2. What does the text encoder in Fig 2 mean? The text tokenizer?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer yYZF (3/3)\", \"comment\": \"> Q1. \\\"What novel designs does this method introduce compared to TTM? Are there ablation studies for these designs?\\\"\\n\\nThe table below (also mentioned in the above post) shows the ablations comparing different extensions of the TTM. As we are able to observe, the original TTM performs poorly, and the time-stamped positional encoding and the grouped formulation enable much better results.\\n\\n| Temporal encoder | MSVD-QA | TGIF-QA | ActivityNet-QA | NExT-QA |\\n|-----------------------------------------------|-----------|-----------|---------|----------|\\n| Original TTM | 76.42 / 4.15 | 75.80 / 4.26 | 54.45 / 3.48 | 75.42 |\\n| TTM + time-stamp | 76.43 / 4.16 | 76.44 / 4.29 | 56.15 / 3.53 | 75.96 |\\n| TTM + grouping | 76.99 / 4.17 | 77.05 / 4.30 | 55.92 / 3.54 | 76.46 |\\n| Ours (time-stamp + grouping) | 77.29 / 4.18 | 77.10 / 4.31 | 56.66 / 3.56 | 77.07 |\\n\\n\\n\\n> Q2. \\\"The model utilizes the VideoChatGPT instruction set. 
Why hasn't it been evaluated on that benchmark?\\\"\\n\\nFollowing the suggestion from the reviewer, we evaluated our model on the VideoInstruct benchmark.\\n\\n| Model | VideoInstruct accuracy |\\n| --- | --- |\\n| PLLaVA-34B | 3.32 |\\n| SlowFast-LLaVA-34B | 3.32 |\\n| VideoGPT+ | 3.28 |\\n| ST-LLM-7B | 3.15 |\\n| **BLIP-3-Video (3.9B)** | 3.11 |\\n| VideoChat2_HD_mistral | 3.10 |\\n| LITA-13B | 3.04 |\\n| LLaMA-VID-13B | 2.99 |\\n| VideoChat2 | 2.98 |\\n| LLaMA-VID-7B | 2.89 |\\n| Video-ChatGPT | 2.38 |\\n\\nIt shows a similar trend to the experiments with other datasets. We believe BLIP-3-Video performs very reasonably on this benchmark, considering its smaller size and its use of much fewer visual tokens. Notably, we performed quite a bit better than LLaMA-VID-7B, which also uses 32 visual tokens like ours (and has a bigger LLM).\"}", "{\"title\": \"Response to reviewer hUbh (3/3)\", \"comment\": \"> Q1. \\\"The (video part of the) training of this model is on video captioning data and video question-answering datasets. If the downstream task were to change to a more complex task, like temporal reasoning, would the model require more tokens or would 16~32 still be sufficient? i.e. is there enough visual information encoded in the 16-32 tokens?\\\"\\n\\nWe agree that whether 32 tokens would be sufficient is an open question. There is a good chance that very long and complicated videos may require more frames and tokens. However, we would like to clarify that our observation from this paper is expected to still hold: the current video-based VLM models (without a proper temporal encoder) have too many visual tokens, and (intelligently) reducing them to 1/20 does not harm the accuracy and only makes it more efficient. For more complicated tasks requiring more frames, we believe all of the models will need to increase the number of tokens for the best accuracy. 
Simultaneously, BLIP-3-Video is likely to always require relatively fewer tokens than naive video VLMs without a temporal encoder. For the benchmarks we tested, we were able to confirm such behavior.\\n\\n\\n\\n> Q2. \\\"In addition, if the downstream task requires remembering multiple details and nuanced events over a long diverse scenario, how would this approach perform? Is there a built-in mechanism that prevents information loss during token pooling?\\\"\\n\\nYes, our temporal encoder (e.g., TTM) has a 'memory' mechanism in it, and it is expected to learn to select/preserve important tokens that best benefit the training task. If training data with such properties is provided, the temporal encoder will be optimized that way. Obviously, if there are too many important tokens, it will reach its capacity. (The size of the memory in TTM was 512 tokens, which is before they are finally pooled to 32 or 128.)\\n\\nThe focus of this paper has been on capturing temporal information over shorter (~2 min) video segments. What we show in the paper is that our model with the temporal encoder mechanism enables efficient capturing of information in video segments compared to prior work. What we claim is that it is a better/more efficient way to handle a video clip.\\n\\nVery long videos are outside the scope of this research work. However, this is an important research area, and we can offer the following insights. In order to handle long videos (e.g., movies), additional (hierarchical) LLM mechanisms similar to LLovi, LangRepo, or VideoTree would be very useful. 
Such framework design is orthogonal to our work; our model can serve as the VLM component within those types of frameworks, enabling extraction of better video information from local/short segments that a higher-level LLM can combine for long-video modeling.\\n\\nWe will clarify and discuss this further in the final version of the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Please ensure that all discussion details and additional experimental results are presented exactly as they are in the final paper, along with the complete code being open-sourced.\"}" ] }
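The token-compression scheme discussed in the exchanges above (a temporal encoder mapping roughly 128 * T frame tokens down to 32 outputs) can be illustrated with a minimal attentional-pooling sketch. This is an editorial toy rather than BLIP-3-Video's actual implementation: the single-head, projection-free cross-attention against learnable queries, and all tensor shapes, are assumptions made only for illustration.

```python
import numpy as np

def attention_pool(tokens, queries):
    """Pool N tokens down to M tokens via softmax cross-attention
    against M learnable query vectors (single head, no projections)."""
    d = tokens.shape[1]
    scores = queries @ tokens.T / np.sqrt(d)        # (M, N) similarity scores
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over the N tokens
    return weights @ tokens                         # (M, d) convex combinations

rng = np.random.default_rng(0)
T, per_frame, d, M = 8, 128, 64, 32
video_tokens = rng.normal(size=(T * per_frame, d))  # 1024 spatio-temporal tokens
queries = rng.normal(size=(M, d))                   # learnable in a real model
pooled = attention_pool(video_tokens, queries)
print(pooled.shape)  # (32, 64)
```

Each pooled token is a convex combination of all input tokens, so information is mixed rather than hard-selected; a TTM-style encoder additionally carries a token memory across frames, which is the mechanism the authors credit with preserving important tokens.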
CKXul9iX77
A Deep Generative Learning Approach for Two-stage Adaptive Robust Optimization
[ "Aron Brenner", "Rahman Khorramfar", "Jennifer Z Sun", "Saurabh Amin" ]
Two-stage adaptive robust optimization (ARO) is a powerful approach for planning under uncertainty, balancing first-stage decisions with recourse decisions made after uncertainty is realized. To account for uncertainty, modelers typically define a simple uncertainty set over which potential outcomes are considered. However, classical methods for defining these sets unintentionally capture a wide range of unrealistic outcomes, resulting in overly-conservative and costly planning in anticipation of unlikely contingencies. In this work, we introduce AGRO, a solution algorithm that performs adversarial generation for two-stage adaptive robust optimization using a variational autoencoder. AGRO generates high-dimensional contingencies that are simultaneously adversarial and realistic, improving the robustness of first-stage decisions at a lower planning cost than standard methods. To ensure generated contingencies lie in high-density regions of the uncertainty distribution, AGRO defines a tight uncertainty set as the image of "latent" uncertainty sets under the VAE decoding transformation. Projected gradient ascent is then used to maximize recourse costs over the latent uncertainty sets by leveraging differentiable optimization methods. We demonstrate the cost-efficiency of AGRO by applying it to both a synthetic production-distribution problem and a real-world power system expansion setting. We show that AGRO outperforms the standard column-and-constraint algorithm by up to 1.8% in production-distribution planning and up to 8% in power system expansion.
[ "robust optimization", "stochastic optimization", "discrete optimization", "deep learning", "unsupervised learning" ]
Accept (Poster)
https://openreview.net/pdf?id=CKXul9iX77
https://openreview.net/forum?id=CKXul9iX77
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ytsLmZTiij", "ynwArfALtB", "uUCzaoeazl", "qAu13bMwZn", "pFpxYvOTIu", "mjGdVj5cWL", "i2UdoFR7XT", "epev6s4EnO", "dO2aRnW11O", "cfRFCH0LiV", "bcLpSNLptw", "YQtFuMZsSR", "VfNyzIuC1f", "RMrCFA9V0N", "PjpUZ3l3KL", "GUAHJcjGZn", "FXEjdflaxb", "CmKgUvRDzv", "BrlEPcA3GL", "Bo0L2iNzMM", "9Orkt9TjsV", "1FNyrjDWWL", "0owS9u5bHF" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1731405272946, 1732328519225, 1730044838815, 1732375463998, 1737524166580, 1732375621613, 1730772669346, 1732328258057, 1732328713687, 1732328154996, 1733166075355, 1732378598070, 1732378543750, 1733161857257, 1732327985421, 1732327869598, 1731020948163, 1732379717666, 1732328738423, 1732375593111, 1732378565432, 1732327663848, 1735467491117 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12099/Reviewer_vrQ2" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Reviewer_GwvX" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Reviewer_A31W" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Reviewer_A31W" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Reviewer_foZ7" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Authors" ], [ "ICLR.cc/2025/Conference/Submission12099/Area_Chair_ziyt" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces AGRO, a novel method for two-stage adaptive robust optimization (ARO) using a variational autoencoder (VAE) to generate adversarial and realistic uncertainty sets. The authors demonstrate that AGRO reduces planning costs in ARO tasks, outperforming classical approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed AGRO framework is innovative, embedding a VAE within a column-and-constraint generation (CCG) scheme to achieve high-dimensional adversarial generation with cost efficiency.\\n2. It seems that the empirical results highlight an over 10% cost reductions over classical methods.\", \"weaknesses\": \"1.\\tThe introduction lacks a comprehensive motivation for using a VAE for uncertainty sets over other generative models. The authors should justify why a VAE was chosen and discuss the potential advantages over alternatives, like GANs or normalizing flows, which may also be suitable.\\n2.\\tThe discussion on the choice of the VAE bottleneck dimension (parameter L) could be expanded. The authors should provide more insight into how different L values affect the uncertainty set\\u2019s coverage and the balance between computational cost and model fidelity.\\n3.\\tWhile the experiments are detailed, there is no mention of computational time for VAE training or comparison with other ARO solutions. 
Including such results would enhance transparency about AGRO\\u2019s feasibility in larger-scale applications.\\n4.\\tThe paper does not explore alternative formulations for the adversarial subproblem. A comparison with different optimization methods or a discussion on the limitations of projected gradient ascent could further clarify AGRO's robustness.\", \"questions\": \"1.\\tWhy did you choose a VAE over other generative models (e.g., GANs, normalizing flows) for constructing uncertainty sets in AGRO? Would these models offer any advantages or limitations compared to VAEs in this application?\\n2.\\tHow does the bottleneck dimension (L) influence the overall performance and reliability of AGRO? Could you elaborate on any trade-offs between computational cost and uncertainty set coverage as L varies?\\n3.\\tThe paper discusses using VAE-based uncertainty sets to achieve tighter approximations. Could you clarify how you ensure these sets are both realistic and adversarial? Are there any specific quantitative or qualitative metrics that assess the accuracy of these generated uncertainty sets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Comparison of AGRO and CCG with other methods for ARO\", \"comment\": \"Several methods are available for solving ARO problems, but the cost outcomes of these methods are always bounded below by those obtained with CCG, which is an exact solution method for ARO with linear recourse. This holds true for both traditional approaches, such as linear decision rules [1], and more recent machine learning-assisted ARO techniques [2, 3], both of which approximate recourse decisions as functions of uncertain parameters in order to reduce computational runtime at the cost of optimality. 
Other exact solution methods, such as the cut-generation method proposed in [4], are also applicable but ultimately produce the same planning outcomes as CCG.\\n\\nTo be concrete, AGRO is contrasted with previous works on learning-assisted ARO [2,3] in that AGRO does not use an ML model to predict recourse costs and instead exactly solves the recourse problem in each iteration. While this restricts AGRO's applicability to problems with linear recourse (in contrast to [2], which can address more general recourse problems involving mixed-integer variables), it offers a significant advantage. AGRO operates as an exact solution method that only requires training on a dataset of realizations, bypassing the need to generate datasets of realizations paired with potentially costly-to-compute recourse costs.\\n\\nUltimately, the core advantage of our approach lies in its ability to reduce costs by avoiding overly conservative planning associated with loose uncertainty sets. For these reasons, we do not compare our method with other ARO solution techniques, as these methods cannot outperform CCG in terms of minimizing planning costs.\", \"references\": \"[1] Kuhn, Daniel, Wolfram Wiesemann, and Angelos Georghiou. \\\"Primal and dual linear decision rules in stochastic and robust optimization.\\\" Mathematical Programming 130 (2011): 177-209.\\n\\n[2] Dumouchelle, Justin, et al. \\\"Neur2RO: Neural two-stage robust optimization.\\\" The Twelfth International Conference on Learning Representations. 2023.\\n\\n[3] Thiele, Aur\\u00e9lie, Tara Terry, and Marina Epelman. \\\"Robust linear optimization with recourse.\\\" Rapport technique (2009): 4-37.\"}", "{\"summary\": \"This paper proposes the AGRO algorithm, which performs adversarial generation for two-stage adaptive robust optimization using a variational autoencoder. 
By decomposing the optimization problem to solving the 'main' problem and the adversarial subproblem iteratively, AGRO can provide tighter uncertainty estimation and lead to better optimization outputs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written, easy to follow, and understand.\\n\\n2. With VAE-learned uncertainty, the proposed AGRO method does tighten the uncertainty bounds and leads to better optimization outcomes. An intuitive example in Figure 2 and experimental results clearly demonstrate this.\", \"weaknesses\": \"1. In Section 3.2, the author proposes a projected gradient ascent heuristic method to optimize $q$. Although this PGA method is well-explained in the article and I understand why the author uses it, I still expect an ablation study on directly optimizing $q$ to show if PGA could still guarantee some level of optimization quality and if there are any speed improvements.\\n\\n2. Although the proposed AGRO method is an improvement based on the CCG method, in the experiments section, the author should compare with more baselines for the two-step optimization problem, which I believe is a well-studied problem with many methods proposed to solve it.\", \"questions\": \"My main question for this work is: Is the optimization problem the author wants to solve exactly a linear optimization problem (see Eq. 1, 7, 9, 10)? If so, there are already many tools for solving linear optimization problems, so why is the proposed method better than those?\\n\\nIf not, what kind of optimization problem does AGRO solve? Only convex?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarifying Figure 1\", \"comment\": \"We appreciate the reviewer's suggestion to refine the description of Figure 1 to convey the nature of the main problem better. 
We revised the description to include a reference to Equation 3 in Section 2.1 and changed the sentence \\\"First-stage decisions $\\\\boldsymbol{x}^*$ are obtained by solving a main problem for a finite set of uncertainty realizations, $\\\\mathcal{S}$\\\" to \\\"First-stage decisions $\\\\boldsymbol{x}^*$ are obtained by solving a main problem, which approximates the original ARO uncertainty set $\\\\mathcal{U}$ by a finite scenario set $\\\\mathcal{S}$ (see Eq. (3))\\\".\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Additional Experimental Details\", \"comment\": \"We appreciate the reviewer's suggestion to include additional experimental details to enhance the completeness and reproducibility of our work. For Figure 3 (left), we will provide the standard deviations from the 50 previously conducted experimental trials in the revised manuscript. Regarding Table 1, we plan to perform additional experiments for the capacity expansion study to obtain more robust estimates of average costs, runtimes, and their respective standard deviations. Furthermore, we will expand the Computational Details subsections in Appendix B.1 and B.2 to include more comprehensive information about the VAE architecture and training setup.\"}", "{\"summary\": \"This paper addresses the two-stage adaptive robust optimization (ARO) problem, where a key challenge is constructing an effective uncertainty set. The authors propose using a deep generative model to learn the uncertainty set, aiming to avoid overly conservative optimization. The method is evaluated on a synthetic production-distribution problem and a regional power system expansion problem.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and clear.\\n\\n2. Leveraging deep generative models to learn the uncertainty set is a promising approach.\", \"weaknesses\": \"1. 
The Projected Gradient Ascent (PGA) method does not guarantee convergence to the worst-case uncertainty realization. Although the authors propose randomly initializing PGA with different samples of $z$ for empirical performance, providing some theoretical analysis on the approximation error would be beneficial.\\n\\n2. The performance improvement of the proposed method is minimal.\", \"questions\": \"1. The paper suggests that the framework is general and could also be applied to diffusion models. However, diffusion model training involves matching the score of the noised distribution, and samples cannot be easily obtained during training. Could you elaborate on how diffusion models would integrate with your proposed framework?\\n\\n2. How do you ensure the uncertainty set learned by the VAE is sufficiently tight? When optimizing over the latent space, are there any constraints? If not, is there a risk that the algorithm could select an overly conservative worst-case realization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Comparison of PGA with other methods\", \"comment\": \"We kindly refer the reviewer to our response for Reviewer vrQ2 entitled \\\"Comparison of PGA with other methods\\\".\"}", "{\"title\": \"Question regarding applicability of AGRO\", \"comment\": \"AGRO is designed for two-stage adaptive robust optimization (ARO) problems with linear recourse, where $\\\\min_{y \\\\in \\\\mathcal{Y}(\\\\boldsymbol{x}, \\\\boldsymbol{\\\\xi})} \\\\boldsymbol{d}(\\\\boldsymbol{\\\\xi})^\\\\top \\\\boldsymbol{y}$ is a linear optimization problem. Two-stage ARO is generally nonconvex due to its min-max-min structure. 
Specifically, while the innermost problem $\\\\min_{y \\\\in \\\\mathcal{Y}(\\\\boldsymbol{x}, \\\\boldsymbol{\\\\xi})} \\\\boldsymbol{d}(\\\\boldsymbol{\\\\xi})^\\\\top \\\\boldsymbol{y}$ is a linear program, its optimal objective value is convex in $\\\\boldsymbol{\\\\xi}$ [1]. This implies that the inner max-min problem is a convex maximization problem, which is generally challenging to solve. Consequently, one must rely on iterative solution algorithms, such as column-and-constraint generation (CCG) and cut-generation algorithms [2], or simplifying approaches, such as linear decision rules [3], which reformulate the original nonconvex problem into a tractable single-stage problem.\\n\\nAlthough we do not discuss this extension in the submission, AGRO can also be used to solve nonlinear single-stage robust optimization problems of the form:\\n\\\\begin{align*}\\n \\\\min_{\\\\boldsymbol{x} \\\\in \\\\mathcal{X}} \\\\quad & \\\\boldsymbol{c}^\\\\top \\\\boldsymbol{x} \\\\quad \\\\mathrm{s.t.} \\\\quad f(\\\\boldsymbol{x}, \\\\boldsymbol{\\\\xi}) \\\\leq \\\\boldsymbol{0}, \\\\ \\\\forall \\\\boldsymbol{\\\\xi} \\\\in \\\\mathcal{U},\\n\\\\end{align*}\\nwhere $f$ is convex in $\\\\boldsymbol{x}$ and differentiable in $\\\\boldsymbol{\\\\xi}$. This generalization can be achieved using a constraint generation approach, similar to CCG. In this method, AGRO generates adversarial realizations $\\\\boldsymbol{\\\\xi}$ by maximizing $f(\\\\boldsymbol{x}, \\\\boldsymbol{\\\\xi})$ for a given $\\\\boldsymbol{x}$, which are then added to a finite scenario set.\\n\\nBuilding on its ability to address the challenges of two-stage ARO, AGRO\\u2019s core methodology also offers a promising foundation for broader applications in robust optimization. By leveraging deep unsupervised learning to efficiently identify adversarial scenarios, AGRO contributes a novel framework for tackling high-dimensional uncertainty in optimization. 
While this work emphasizes two-stage ARO, we envision future efforts extending AGRO's generative framework to new optimization paradigms. This includes advancing risk estimation for large-scale systems and identifying out-of-sample or out-of-distribution contingencies. These applications highlight AGRO\\u2019s potential to drive innovation in uncertainty-aware optimization through the integration of generative methods and robust optimization principles.\\n\\n[1] Bertsimas, Dimitris, et al. \\\"Adaptive robust optimization for the security constrained unit commitment problem.\\\" IEEE transactions on power systems 28.1 (2012): 52-63.\\n\\n[2] Thiele, Aur\\u00e9lie, Tara Terry, and Marina Epelman. \\\"Robust linear optimization with recourse.\\\" Rapport technique (2009): 4-37.\\n\\n[3] Kuhn, Daniel, Wolfram Wiesemann, and Angelos Georghiou. \\\"Primal and dual linear decision rules in stochastic and robust optimization.\\\" Mathematical Programming 130 (2011): 177-209.\"}", "{\"title\": \"Comparison of PGA with other methods\", \"comment\": \"Indeed, a comparison with alternative algorithms is useful for benchmarking the robustness and performance of AGRO. In this context, we attempted to solve the bilinear formulation of the adversarial subproblem (see Equation 7 in Appendix A.2) for the capacity expansion problem using Gurobi's nonconvex optimizer. Additionally, we explored precomputing upper bounds for the variables $\\\\boldsymbol{z}^{(\\\\ell)}$ and $\\\\tilde{\\\\boldsymbol{z}}^{(\\\\ell)}$ as suggested in [1] to enhance scalability. Despite these efforts, we were unable to obtain certificates of optimality for the subproblem within a reasonable time. For example, after one hour of computation, the solver reported an optimality gap of 400\\\\%. 
Given the high computational cost of solving this subproblem in each iteration of CCG, we decided to discontinue this approach.\\n\\nIn response to feedback from reviewers requesting comparisons between PGA and other solution methods, we revisited this approach, aiming to reduce runtimes by providing a warm start for the bilinear formulation using the solution obtained via PGA. Unfortunately, even with this enhancement, we could not achieve a provably optimal solution for a single iteration of the adversarial subproblem within the one-hour time limit.\\n\\nFor completeness, we are generating additional results from applying Gurobi's nonconvex optimizer to solve the adversarial subproblem for the production distribution problem, which is considerably lower dimensional (and perhaps more tractable) than the capacity expansion problem. The final submission will include findings regarding the optimality of solutions obtained with PGA compared to provably optimal solutions, should they be achievable. In the case that provably optimal solutions are not achievable (as was the case for the capacity expansion study), the final submission will include a brief summary of these findings.\", \"references\": \"[1] Fischetti, Matteo, and Jason Jo. \\\"Deep neural networks and mixed integer linear optimization.\\\" Constraints 23.3 (2018): 296-309.\"}", "{\"comment\": \"Thank you for your response. As the paper primarily relies on empirical performance without providing any theoretical guarantees and is evaluated on only two datasets, I believe it is a borderline paper and will maintain my score.\"}", "{\"title\": \"Applicability of diffusion models\", \"comment\": \"We thank the reviewer for raising this point. To clarify, our proposed framework does not involve generating samples during the training phase of the generative model. 
Instead, the generative model is first trained on the dataset of uncertainty realizations and then used to generate adversarial realizations during the optimization phase. During this phase, the only requirement is that the generated adversarial realizations be differentiable with respect to the inputs of the generative model. As such, we believe diffusion models can be integrated into the AGRO framework.\\n\\nIn the revised version of the manuscript, we explicitly clarify that the optimization phase operates on fully trained generative models and does not assume sample generation during their training.\"}", "{\"title\": \"Theoretical guarantees for PGA\", \"comment\": \"Thank you for the reviewer\\u2019s thoughtful feedback on performance guarantees. Our primary objective was to develop a learning-based approach to reduce planning costs associated with overly conservative strategies resulting from \\\"loose\\\" uncertainty sets (e.g., polyhedral, elliptical, etc.). To achieve this, we proposed a learning and optimization framework that generates nonconvex uncertainty sets, recognizing that this comes at the expense of theoretical guarantees due to the bilinear (and inherently nonconvex) nature of the resulting subproblem.\\n\\nProviding theoretical guarantees for approximation error is inherently challenging, as determining the value of the global minimum -- let alone the optimal solution -- for a nonconvex optimization problem is a provably NP-hard task [1]. Given these limitations, we rely on experimental validation to demonstrate the effectiveness of AGRO. 
Our results underscore the practical advantages of this approach in greatly reducing costs for the widely studied problem of long-term energy system planning with real-world supply and demand data.\\n\\nFor completeness, we are generating additional results from applying Gurobi's nonconvex optimizer to solve the adversarial subproblem for the production distribution problem, which is considerably lower dimensional (and perhaps more tractable) than the capacity expansion problem. The final submission will include findings regarding the optimality of solutions obtained with PGA compared to provably optimal solutions, should they be achievable. In the case that provably optimal solutions are not achievable, as was the case for the capacity expansion study (see our response to Reviewer vrQ2 entitled \\\"Comparison of PGA with other methods\\\"), the final submission will include a brief summary of these findings.\", \"references\": \"[1] Danilova, Marina, et al. \\\"Recent theoretical advances in non-convex optimization.\\\" High-Dimensional Optimization and Probability: With a View Towards Data Science. Cham: Springer International Publishing, 2022. 79-163.\"}", "{\"title\": \"Final note to reviewers\", \"comment\": \"As we approach the conclusion of the discussion period, we invite reviewers to share any final questions or comments for the authors before proceeding with their evaluations. Below is an overview of new findings and major revisions made to the final submitted manuscript:\\n\\n### Revised Deterministic Parameters in Section 4\\nWe have updated the deterministic parameters used to define the capacity expansion problem in Section 4, enhancing the realism of the capacity expansion model. This adjustment is reflected in the latest experimental findings, where the case of $L=4$ now yields the lowest cost across all methods instead of $L=2$. This shift highlights the influence of ramping and net-load volatility on operational and overall planning costs.
The revised parameters also resulted in a less tractable bilinear subproblem for CCG, leading to a sixfold increase in runtime. These refinements provide a more faithful representation of capacity expansion planning considerations, ultimately strengthening our experimental results.\\n\\n### New Subsection in Section 4.2\\nWe have added a subsection titled \\\"Bottleneck Dimension\\\" to Section 4.2. This subsection connects insights on the role of the bottleneck dimension $L$ from the production-distribution case study to those from the capacity expansion planning case study. Additionally, it includes a paragraph offering recommendations for selecting $L$.\\n\\n### Updates to Section 3.1\\nThe second paragraph of Section 3.1 has been revised to highlight the potential advantages of GANs, normalizing flows, and diffusion models in generating higher-fidelity samples compared to VAEs. We then justify the use of VAEs by referencing experimental findings in Section 4, which show that higher generative fidelity does not necessarily lead to better performance concerning the ARO objective.\\n\\n### Expanded Computational Experiment Details\\nAdditional details have been included regarding the computational experiments. Specifically, Figure 3 (left) and Table 1 now report standard deviations across all trials. Furthermore, Appendices B.1 and B.2 have been expanded to provide more comprehensive information about the architecture and training process for the VAEs.\\n\\n### Comparison of PGA with Direct Solve\\nFinally, we note that the final submission does not include a complete comparison of projected gradient ascent (PGA) with the direct solve approach for the mixed-binary bilinear formulation of the AGRO subproblem (see Appendix A.2). 
However, our experimental findings can be summarized as follows:\\n\\n- For the production-distribution problem with $|\\\\mathcal{J}| \\\\in \\\\{3,6\\\\}$, PGA obtained solutions to the adversarial subproblem that were, on average, within 1\\\\% of optimality as determined by the direct solve.\\n\\n- For the capacity expansion planning problem, a full cost comparison could not be provided as the direct solve failed to achieve provably optimal solutions within a reasonable timeframe. Specifically, the solver reported a 400\\\\% optimality gap after one hour.\\n\\n***\\n\\n### Thank you!\\nWe are grateful for your thoughtful feedback throughout this process. We look forward to your final evaluations and appreciate your valuable contributions to improving this manuscript.\"}", "{\"title\": \"Ensuring accuracy of generated uncertainty sets\", \"comment\": \"We appreciate the reviewer\\u2019s concern regarding the ability of our approach to ensure that the generated uncertainty sets are appropriately adversarial and realistic. While we acknowledge that our findings are not necessarily generalizable to all problem settings, we emphasize that theoretical guarantees are outside the scope of this work. Instead, our focus lies on experimentally demonstrating the efficacy of the proposed approach for reducing planning costs in ARO.\\n\\nTo evaluate the realism and coverage of the generated uncertainty sets, we employ both qualitative and quantitative assessments of the generative model's performance. Quantitatively, we rely on standard metrics such as precision, density, recall, and coverage scores [1] to evaluate the diversity and fidelity of the VAE. 
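For concreteness, the density and coverage scores of [1] admit a compact implementation. The following is a rough numpy sketch under the definitions of Naeem et al.; variable names are ours, and the actual evaluation pipeline may differ.

```python
import numpy as np

def density_coverage(real, fake, k=5):
    # Naeem et al. (2020): density counts, per fake sample, how many real
    # k-NN balls contain it (normalized by k); coverage is the fraction of
    # real samples whose k-NN ball contains at least one fake sample.
    d_rr = np.linalg.norm(real[:, None, :] - real[None, :, :], axis=-1)
    np.fill_diagonal(d_rr, np.inf)               # exclude self-distance
    radii = np.sort(d_rr, axis=1)[:, k - 1]      # k-NN radius per real sample
    d_rf = np.linalg.norm(real[:, None, :] - fake[None, :, :], axis=-1)
    inside = d_rf <= radii[:, None]              # ball of real i contains fake j
    density = inside.sum() / (k * fake.shape[0])
    coverage = inside.any(axis=1).mean()
    return density, coverage
```

High density indicates generated samples concentrate where real samples are dense (fidelity), while high coverage indicates the generator does not drop regions of the real distribution (diversity).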
Qualitatively, we compare visualized samples generated by the model to true observations, providing additional evidence of the realism of the scenarios produced.\\n\\nWhile these methods do not offer explicit guarantees on the coverage of the generated uncertainty sets, our experimental results strongly suggest that AGRO generates appropriately adversarial realizations. Specifically, our capacity expansion study demonstrates that AGRO yields robust approximations of the 95\\\\% chance constraint in all tested cases except $L=2$. This conclusion is substantiated by Fig. 5 in Appendix B.2.1, where the worst-case costs estimated by AGRO consistently exceed the sample-based estimates -- used as proxies for the true objective -- except for $L=2$. Furthermore, Fig. 4 shows that worst-case realizations derived from the AGRO uncertainty set are more realistic compared to those obtained using classical uncertainty sets.\\n\\nWe recognize the importance of providing a balanced and transparent discussion of these findings. To address this, Section 4 of \\nthe final manuscript will include revisions that make the above conclusions more explicit alongside discussion of best practices for choosing $L$ (see our response entitled \\\"Role of bottleneck dimension\\\").\", \"references\": \"[1] Naeem, Muhammad Ferjad, et al. \\\"Reliable fidelity and diversity metrics for generative models.\\\" International Conference on Machine Learning. PMLR, 2020.\"}", "{\"title\": \"Role of bottleneck dimension\", \"comment\": \"We thank the reviewer for their insightful observation. Below, we elaborate on our findings, which we are working to incorporate as revisions to the main text.\\n\\nIn our experiments, we observed that the optimal choice of bottleneck dimension depended on the problem setting. Interestingly, lower-dimensional bottlenecks sometimes resulted in reduced costs despite yielding worse diversity and fidelity metrics. 
We hypothesize that this behavior arises due to two factors:\\n\\n1. Uncertainty set-based approximations of chance-constrained programs are inherently conservative. These sets aim to capture a region containing 95\\\\% probability mass, rather than the region containing the least adversarial realizations, which would provide the tightest approximation of the chance constraint.\\n\\n2. Reduced fidelity can lead to less adversarial realizations, thereby lowering costs driven by over-conservatism. For example, a VAE with a low-dimensional bottleneck may fail to achieve the desired 95\\\\% coverage, but the reduced coverage offsets the conservative nature of the uncertainty set approximation, ultimately leading to lower planning costs.\\n\\nThis phenomenon was evident in our capacity expansion planning experiments, where the \\\"smoothing\\\" effect of a VAE with $L=2$ resulted in lower planning costs compared to higher-dimensional bottlenecks ($L > 2$), which produced more volatile load profiles. A similar trend was observed in the production distribution problem. For a low-dimensional setting ($|\\\\mathcal{J}|=3$), the VAE with $L=1$ achieved lower costs compared to $L=4$. However, in a higher-dimensional setting ($|\\\\mathcal{J}|=12$), the VAE with $L=4$ outperformed $L=1$. In this regime, low fidelity led to higher costs due to underconservatism, in contrast to the lower-dimensional case.\\n\\nThese findings indicate that the bottleneck dimension $L$ should not be optimized solely for diversity and fidelity metrics. Instead, $L$ can be tuned to balance these metrics with the performance of first-stage decisions on a held-out set. In data-scarce settings, prioritizing satisfactory diversity and fidelity metrics may be preferable, emphasizing uncertainty set coverage over potentially nonrobust planning cost estimates. 
Ultimately, this tradeoff requires users to consider a combination of quantitative generative model metrics, qualitative sample evaluations, and performance on the downstream ARO objective.\\n\\nIn response to reviewer comments regarding the role of the VAE bottleneck dimensionality, the final manuscript will include revisions to Section 4 that (1) consolidate observations from both case studies to make the above conclusions regarding the impact of $L$ more explicit and (2) recommend best practices for choosing $L$.\"}", "{\"summary\": \"This paper presents a novel deep generative approach using Variational Autoencoders (VAE) to tackle two-stage adaptive robust optimization (ARO) under high-dimensional uncertainty. Traditional ARO approaches for constructing the uncertainty set $\\\\mathcal{U}$ tend to be overly conservative, often leading to excessive resource allocation in scenarios with high-dimensional and irregularly distributed uncertainties. The proposed AGRO method mitigates this issue by incorporating VAE embedding and column-and-constraint generation (CCG). The method also uses projected gradient ascent to solve the formulated subproblem. Experiments on two problems demonstrate the advantages of this method over conventional CCG approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The primary contribution of this work is the innovative application of VAE to construct a tighter uncertainty set, thereby reducing over-conservatism in high-dimensional decision-making, which is then addressed through CCG. The paper is clearly presented, with informative visuals such as Figure 2, and well-organized notation and formulations. The experimental results highlight the promise of the proposed method.\", \"weaknesses\": \"The proposed approach involves training a VAE, whose performance might be sensitive to hyperparameters, computational resources, and the amount of available training data. 
Additional experiments and discussion could enhance the paper\\u2019s applicability. Please see the following questions for further details.\", \"questions\": [\"In Figure 1, the authors use two 3D visualizations to illustrate the two-stage operations on $\\\\mathcal{U}$ and $\\\\mathcal{Z}$. Could the authors provide a brief description of the specific main problem addressed here to give the audience a better understanding?\", \"In the experiment on the production-distribution problem, the authors observed a reverse effect of the bottleneck dimension in the low-dimensional case of $|\\\\mathcal{J}|=3$. Could the authors elaborate on possible reasons for this?\", \"Based on the experiments, could the authors provide practical guidelines for selecting the appropriate bottleneck dimension for VAEs according to the dimensionality or complexity of the uncertainty set?\", \"For tabular results such as those in Figure 3 (left) and Table 1, could the authors also report the standard deviation across trials? This would help readers understand the robustness of AGRO in different practical scenarios.\", \"Could the authors provide more details on the VAE architecture and training settings used in each experiment? Such as layer dimensions, normalization, optimizer, and learning rate, for better reproducibility.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Tightness of AGRO uncertainty sets\", \"comment\": \"While we do not provide formal theoretical guarantees, we believe that the uncertainty sets learned by AGRO are naturally tight due to the design of the generative model. For VAEs in particular, the training process penalizes reconstruction error, which discourages the generation of unrealistic samples and ensures that reconstructed outputs align with the typical set of the target distribution. 
In our experiments, we observe this behavior through (1) out-of-sample evaluations of planning costs, where tighter uncertainty sets result in lower costs while still satisfying the chance constraint, and (2) visual comparisons of worst-case realizations from AGRO and CCG, which show that AGRO produces significantly more realistic adversarial scenarios.\\n\\nTo address the concern about overly conservative worst-case realizations, we explicitly constrain the optimization over the latent space to $\\\\boldsymbol{z} \\\\in \\\\mathcal{Z}$ (see Figure 1). This constraint ensures that the selected adversarial scenarios remain realistic and prevents the algorithm from yielding overly conservative solutions.\"}", "{\"title\": \"VAE training time\", \"comment\": \"Regarding training times for the VAEs, we provide results for the production distribution study in the Computational Details subsection of Appendix B.1 and for the capacity expansion study in Tab. 1 in Section 4.2.\"}", "{\"title\": \"Role of bottleneck dimension\", \"comment\": \"Regarding the reviewer's comment on providing practical guidelines for choosing the bottleneck dimension, we first kindly refer the reviewer to our response to Reviewer vrQ2, entitled ``Role of bottleneck dimension.'' Building on this, we recommend selecting the bottleneck dimensionality by considering: (1) quantitative generative model metrics (e.g., density and coverage), (2) qualitative evaluations of generated samples (e.g., visual inspection), and (3) cost estimates associated with the downstream ARO objective.\\n\\nSpecifically, when a large amount of data is available to obtain robust out-of-sample estimates of planning costs, we suggest selecting $L$ to be as small as possible while still achieving a reliable approximation of the chance constraint. 
This can be done by comparing $q(\\\\boldsymbol{\\\\xi}, \\\\boldsymbol{x}^*)$ for $\\\\boldsymbol{\\\\xi}$ obtained from the adversarial subproblem against sample estimates $\\\\hat{F}^{-1}(\\\\alpha; \\\\boldsymbol{x}^*)$ (see Fig.~5 in Appendix B.2.1). Conversely, in settings where out-of-sample evaluations of planning costs are difficult to obtain (e.g., if data is limited or solving the recourse problem is costly), we recommend selecting $L$ to achieve satisfactory diversity and fidelity metrics. For example, $L$ should be chosen as small as possible while ensuring that density scores remain above a minimum threshold, such as $0.9$, to achieve a safer approximation of the chance constraint.\\n\\nThe reviewer also raises an insightful point regarding the observed reversed effect of bottleneck dimensionality for $|\\\\mathcal{J}| = 3$. We attribute this to the same phenomenon that led to lower planning costs for $L = 2$ in the capacity expansion study. Specifically, reduced coverage of the uncertainty set (evident for $L = 1$ when $|\\\\mathcal{J}| = 3$) offsets the inherently conservative nature of the uncertainty set's approximation of chance constraints, ultimately resulting in lower planning costs. However, as $|\\\\mathcal{J}|$ increases, models with $L = 4$ outperform $L = 1$ as low fidelity and insufficient coverage lead to increased costs due to underconservatism. 
We elaborate on this phenomenon in our comment entitled ``Role of bottleneck dimension'' in response to Reviewer vrQ2.\\n\\nIn light of several reviewers' comments regarding the role of the VAE bottleneck dimension -- both generally and with regard to the experimental results of the production distribution problem -- we are currently revising Section 4 of the text to more clearly convey these findings in relation to observations from both experimental studies.\"}", "{\"title\": \"Performance improvement of AGRO\", \"comment\": \"Our computational experiments show promising performance of AGRO as we achieve up to 10\\\\% reduction in total costs, which is considered substantial in the context of regional energy system planning; in our capacity expansion study, a 10\\\\% savings amounts to billions of dollars (see Table 1).\"}", "{\"title\": \"Motivation for using VAEs\", \"comment\": \"We chose VAEs over other generative methods such as GANs and normalizing flows due to their relatively high training stability and low computational cost for sampling. These characteristics make VAEs particularly well-suited for integration within the AGRO framework. This modeling choice is detailed in Section 3.1. While GANs and normalizing flows can offer potential advantages, particularly in generating higher-fidelity samples in high-dimensional settings, we found that these methods presented notable challenges. Specifically, during preliminary experiments on the capacity expansion case study, neither GANs nor normalizing flows produced samples with significantly better fidelity than the VAE. Moreover, these methods were more challenging to train and, in the case of normalizing flows, slower with regard to generating samples. Consequently, we decided against conducting a comprehensive comparison between these approaches.\\n\\nHowever, we acknowledge that VAEs may encounter limitations in generating high-fidelity samples in more complex, high-dimensional settings. 
To reflect this, the revised version of Section 3.1 offers a more balanced discussion of the trade-offs and limitations of VAEs compared to other generative methods. Specifically, we add that very high-dimensional settings may necessitate the use of alternative generative modeling approaches, such as GANs or normalizing flows, that are known to achieve higher fidelity than VAEs.\"}", "{\"metareview\": \"This paper introduces AGRO, a two-stage adaptive robust optimization (ARO) method that combines variational autoencoder (VAE) within a column-and-constraint generation for adversarial uncertainty sets. The authors show that AGRO reduces planning costs compared to classical ARO approaches.\\n\\nThis is indeed a borderline paper. Although the proposed method achieves significant cost reduction, the exploiting generative model in learning for decision making and RL is not novel, and the choice of VAE seems not well justified.\", \"additional_comments_on_reviewer_discussion\": \"The authors clarified most questions raised by the reviewers. Most of the reviewers acknowledge the contribution of the paper and achieved an agreement, recognizing the paper as borderline.\"}" ] }
CJnceDksRd
DRL: Decomposed Representation Learning for Tabular Anomaly Detection
[ "Hangting Ye", "He Zhao", "Wei Fan", "Mingyuan Zhou", "Dan dan Guo", "Yi Chang" ]
Anomaly detection, which aims to identify anomalies that significantly deviate from the majority of normal instances, plays an important role in machine learning and related applications. Despite the significant success achieved in anomaly detection on image and text data, accurate Tabular Anomaly Detection (TAD) has still been hindered by the lack of clear prior semantic information in tabular data. Most state-of-the-art TAD studies follow the line of reconstruction, which first reconstructs training data and then uses reconstruction errors to decide anomalies; however, reconstruction on training data can still hardly distinguish anomalies due to the data entanglement in their representations. To address this problem, in this paper we propose a novel approach, Decomposed Representation Learning (DRL), that re-maps data into a tailor-designed constrained space in order to capture the underlying shared patterns of normal samples and differentiate anomalous patterns for TAD. Specifically, we enforce the representation of each normal sample in the latent space to be decomposed into a weighted linear combination of randomly generated orthogonal basis vectors, where these basis vectors are both data-free and training-free. Furthermore, we enhance the discriminative capability between normal and anomalous patterns in the latent space by introducing a novel constraint that amplifies the discrepancy between these two categories, supported by theoretical analysis. Finally, extensive experiments on 40 tabular datasets against 16 competing tabular anomaly detection algorithms show that our method achieves state-of-the-art performance.
[ "Anomaly detection", "Tabular data", "Tabular representation learning" ]
Accept (Poster)
https://openreview.net/pdf?id=CJnceDksRd
https://openreview.net/forum?id=CJnceDksRd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z4OW3TOgBe", "vLfN6WYYYZ", "sLNDMFfOm5", "rnEN406qWc", "pHCT16E5vM", "nk9HqNflHl", "lvJseRN0IB", "g7wsw1MsCN", "g5mEEYG65c", "fzLArLwD1K", "eZWBmrU8Bt", "bqm15Yw0tw", "aMncItWqbP", "UfQzat7HWf", "TDNJIMrJAF", "T3Ll5d6gLQ", "QOTSzftFSS", "P9TYfPFIMf", "Nc5sGy5u7Q", "MQEvLUl0f0", "LdCi5CS0Ih", "LXZi9cy6Xm", "Fs30aDfWkn", "98lLcWKRG0", "82b07kGRVG", "7zry2MpGTL", "7VmjPdATTj", "4pJJvPQeda", "4henUp6MTN", "44XAJuy9sS", "3uLOjPqCHN", "3Lss7PHGXe" ], "note_type": [ "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732455510156, 1730607555163, 1732877144841, 1737523711563, 1732455918901, 1732705859127, 1734621000632, 1732455425276, 1732455758014, 1732455841750, 1732583804750, 1732798135476, 1732456098562, 1732583874302, 1732583989762, 1732584031822, 1733131123806, 1732691506806, 1732692800776, 1732456231062, 1732456031737, 1732456155475, 1730596626643, 1730653526676, 1730138070650, 1733114625402, 1732678866818, 1732455662030, 1732456187764, 1732455970911, 1733114274664, 1732455585814 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Reviewer_hbkx" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5518/Area_Chair_Aoqt" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Reviewer_TJDb" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Reviewer_Sgt1" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Reviewer_Sgt1" ], [ "ICLR.cc/2025/Conference/Submission5518/Reviewer_hqNX" ], [ "ICLR.cc/2025/Conference/Submission5518/Reviewer_TJDb" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Reviewer_hbkx" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ], [ "ICLR.cc/2025/Conference/Submission5518/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response (Part 2/4)\", \"comment\": \"W3: Unclear description of Section 2 and MCM limitations.\", \"r_to_w3\": \"Thanks for your insightful suggestion! 
We apologize for any confusion and have improved the clarity of Section 2 in the revised version for better illustration.\\n\\n(1) Standard reconstruction-based approaches consider learning a mapping $A(\\cdot;\\Theta):\\text{R}^D \\to \\text{R}^D$ to minimize the reconstruction loss within $\\mathcal{D}_{train}$, where $D$ is the number of input features. Typically, $A(\\cdot;\\Theta)$ first maps the sample from the observation space to the latent space, and then maps it back to the observation space to obtain the reconstruction of the sample. The parameters $\\Theta$ are optimized by minimizing the reconstruction loss on normal training samples. \\n\\n(2) Based on reconstruction, MCM applies a learnable masking strategy to the input and aims to reconstruct normal samples well with access to only the unmasked entries of the input, where producing effective masks is a key challenge in this field.\\nNPT-AD incorporates both feature-feature and sample-sample dependencies to reconstruct masked features of normal samples by utilizing the whole training set. Therefore, NPT-AD involves a high computational cost in terms of memory and time, due to its reliance on the training set during inference. \\nDespite the effectiveness of MCM and NPT-AD, both of them design their reconstruction strategies in the observed data space, as illustrated in Section 3.1 of the original paper, which usually suffers from potential data entanglement in the observed data space; we showed this phenomenon on more real-world tabular datasets in Fig. 7 of Appendix A.1.\\nIgnoring this issue in anomaly detection can lead to diminished discriminative power between normal and anomalous patterns. To solve the potential data entanglement issue, our proposed DRL introduces representation decomposition in a constrained latent space, where the normal and anomalous patterns are more discriminative.
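Schematically, the decomposition at the core of DRL can be sketched as follows. This is a simplified illustration with hypothetical shapes and helper names: squared Euclidean distance stands in for the distance measurement $d$, and softmax weights are one way to realize simplex-constrained combination weights.

```python
import numpy as np

def make_basis(K, E, seed=0):
    # Randomly generated, data-free and training-free orthogonal basis (K < E),
    # obtained here via QR decomposition of a Gaussian matrix.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((E, K)))
    return q.T                               # K orthonormal rows in R^E

def decompose(h, basis, logits):
    # Simplex weights via softmax, reconstruction as a weighted linear
    # combination of the basis vectors, and the decomposition loss d(h, h_tilde).
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    h_tilde = w @ basis                      # h_tilde = sum_k w_k * beta_k
    loss = float(np.sum((h - h_tilde) ** 2))
    return w, h_tilde, loss
```

Because the basis is fixed and orthogonal, minimizing this loss forces the encoder to place normal representations inside the spanned subspace, which is what makes representations that fall outside it detectable.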
Additionally, DRL exhibits lower computational costs compared to both MCM and NPT-AD, as illustrated in Table 8 in Appendix A.5 of the revised version.\", \"q1\": \"What would happen if the basis vectors were set as unit vectors, for example, [1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]?\", \"r_to_q1\": \"Thanks for your comment! Following your suggestion, we include experiments as follows to evaluate the impact of using unit vectors as basis vectors in DRL.\\nThe results indicate that initializing with unit vectors does not perform as well as the default basis vector generation method in DRL.\\nIn our approach, the reconstructed representation is computed as a linear combination of orthogonal basis vectors: $\\tilde{\\mathbf{h}} = \\sum_{k=1}^K w^k\\beta_k$, where the basis vectors are defined by $\\mathcal{B} = \\\\{\\beta_k\\\\}_{k=1}^K \\in \\text{R}^{K\\times E}$ with $K < E$. By default, $K$ and $E$ are set to 5 and 128, respectively. If the basis vectors were set as unit vectors, the reconstructed representation $\\tilde{\\mathbf{h}}$ would become highly sparse, containing $E-K$ zero entries in its $E$-dimensional space. The main objective of DRL is to minimize the decomposition loss $d(\\mathbf{h}, \\tilde{\\mathbf{h}})$ (Eq. 3), where $d$ is the distance measurement and $\\mathbf{h}$ is the representation extracted by the feature extractor $f$. However, this configuration could lead to inefficient optimization, as $\\mathbf{h}$ would need to be overly sparse to match $\\tilde{\\mathbf{h}}$.
\\n\\n\\n| | Backdoor | Fault | Imgseg | Lympho | Pendigits | Vowels | Wbc | Average of 40 data |\\n|---------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|--------------------|\\n| Unit vector | 0.6794 | 0.6175 | 0.9143 | 0.9484 | 0.8327 | 0.4381 | 0.8911 | 0.6688 |\\n| **DRL (ours)** | **0.8915** | **0.6649** | **0.9238** | **1.0000** | **0.9360** | **0.4506** | **0.9742** | **0.7344** |\"}", "{\"summary\": \"This paper proposes a new method for tabular anomaly detection under the one-class setting via learning decomposed representations of normal samples. The key idea is to decompose normal representations into a weighted linear combination of data-free and training-free orthogonal basis vectors. Furthermore, a separation loss, supported by theoretical analysis, is designed to enhance model\\u2019s discriminative ability between normal and anomaly samples. Extensive experiments conducted on 40 datasets demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThis paper is well-written and easy to read.\\n2.\\tThe proposed decomposed representation learning method sounds reasonable for tabular anomaly detection.\\n3.\\tThe authors provide a theoretical analysis to support the proposed separation loss.\\n4.\\tThe comparative experiments conducted on 40 tabular datasets are quite extensive.\", \"weaknesses\": \"1.\\tI appreciate that this paper highlights the issue of normal and anomaly samples being entangled in raw space. However, the argument that normal and anomaly latent representations are entangled in reconstruction-based methods, leading to diminished discriminative power, seems somewhat unfounded. Since these models are biased towards reconstructing normal samples, it is expected that representations of the two classes would be entangled in latent space. 
Based on this, the motivation for learning decomposed normal representations is unclear to me. What are the specific advantages of learning decomposed representations for tabular anomaly detection?\\n2.\\tThe visualized t-SNE results demonstrate that the proposed method can extract non-entangled normal and anomaly representations. However, the authors may need to explain the rationale for leveraging orthogonal basis vectors to learn such representations. Could vectors with other relational properties achieve the same effect? What are the specific advantages of using orthogonal basis vectors for decomposed representation learning?\\n3.\\tI appreciate the authors\\u2019 efforts in conducting extensive ablation studies. However, what is the performance when using cosine distance for the decomposition loss? Additionally, how does the model perform if only alignment losses are used as anomaly scores during inference?\\n4.\\tThe sensitivity analysis of vector number is good. I wonder if the vector number is somehow related to the feature number of input data.\\n5.\\tI appreciate the theoretical support for the separation loss. However, as the separation loss is applied between samples, is it sensitive to the batch size? Additionally, if the training normal samples are highly similar to each other, does this loss face convergence challenges?\\n6.\\tCan the proposed method be applied to other data types (e.g., image or time series data)? In other words, why is the proposed method specifically useful for tabular data.\\n7.\\tThis is a minor point, but the paper lacks a discussion on limitations and future work.\", \"questions\": \"see the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for taking the time to review our response and for your positive feedback! 
We would like to further explain the remaining issues in T1 and T3.\\n\\nTo improve anomaly detection performance, we actually aim to learn less entangled latent representations without seeing anomaly samples. The way we achieve this includes two aspects. Firstly, we confine the latent representations of normal observations to a constrained latent space defined by a set of orthogonal and fixed basis vectors, where each representation is decomposed into a weighted combination of the basis vectors (the weight vector is on a probability simplex). Besides, we enforce normal representations to span this constrained subspace by introducing the separation loss (minimizing the similarity between weight vectors of normal samples). Intuitively, the representation of an anomaly sample will be 'squeezed out' from the subspace. That is why our learned encoder can distinguish anomaly samples from normal samples better with the separation loss. We will consider providing a theoretical proof to further support this intuition.\\n\\nWe now extend the empirical evidence in Fig. 17 in the revised version to illustrate the impact of the separation loss on performance, which is available at https://anonymous.4open.science/r/DRL1-A5BB. We find that the anomaly score (calculated by the decomposition loss) of anomalous samples is larger than that of normal samples, especially when we introduce the separation loss (minimizing the similarity between weight vectors of normal samples). We will make this clearer in the camera-ready version.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response (Part 3/4)\", \"comment\": \"W4: The sensitivity analysis of vector number is good. I wonder if the vector number is somehow related to the feature number of input data.\", \"r_to_w4\": \"Thank you for your valuable suggestion! We agree that the number of basis vectors may have some relationship with the number of features in the input data.
Considering the feature number when constructing the set of basis vectors is indeed a meaningful idea, and we see this as an important direction for our future work.\\n\\nIn our DRL, we remap the sample from raw space to the latent space and enforce the representation of each normal sample in the latent space to be decomposed into a weighted linear combination of randomly generated orthogonal basis vectors. The hidden dimension for representation is fixed across all datasets, thus it is reasonable to use a default number of basis vectors for each dataset.\\nIn our experiments, we set the default number of basis vectors to 5 for all datasets. We also provide a sensitivity analysis in Fig. 9 of Appendix 6 in the original paper, where we show that setting the number of basis vectors to 5 is sufficient to achieve high performance.\", \"w5\": \"I appreciate the theoretical support for the separation loss. However, as the separation loss is applied between samples, is it sensitive to the batch size? Additionally, if the training normal samples are highly similar to each other, does this loss face convergence challenges?\", \"r_to_w5\": \"Thanks for your valuable suggestion!\\n\\n(1) Following your suggestion, we include the sensitivity analysis w.r.t. batch size on AUC-PR, which is added into Fig. 9 (e) of Appendix (A.6) and summarized in table below for your convenience. The results demonstrate that the performance remains robust across different batch sizes. \\n\\n(2) Regarding loss convergence, the primary objective of DRL is to minimize the decomposition loss (Eq. 3 in the original paper), while the separation loss (Eq. 5 in the original paper) serves as an additional constraint. The separation loss is applied to the normal weights using cosine distance, ensuring that the values remain bounded within a small range. Moreover, the loss weight for the separation loss is set to 0.06 by default, which further constrains its range. 
Additionally, the weights for basis vectors belong to a probability simplex, which prevents cases where all weights become zero during updates. These mechanisms collectively contribute to stable loss convergence. \\nTo verify this, we have added the experimental results in Fig. 18 and 19 in Appendix 10 in the revised version, illustrating the effect of the separation constraint on loss convergence. The results confirm that the separation loss does not negatively impact convergence. \\n\\n\\n| Batch size | Backdoor | Fault | Imgseg | Lympho | Pendigits | Vowels | Wbc | Average of 40 data |\\n|---------------|-----------------|-----------------|-----------------|------------|----------------|-----------------|-----------------|--------------------|\\n| 32 | 0.8692 | 0.65 | 0.909 | 0.9879 | 0.8671 | 0.4312 | **0.9762** | 0.7129 |\\n| 64 | 0.8809 | 0.6321 | 0.926 | 0.9972 | 0.8586 | 0.4465 | 0.9627 | 0.7149 |\\n| 128 | 0.8885 | **0.6699** | **0.9292** | **1** | 0.9012 | 0.4483 | 0.9742 | 0.7295 |\\n| 256 | 0.8718 | 0.639 | 0.9153 | **1** | 0.8931 | 0.4234 | 0.9742 | 0.7167 |\\n| 512 (Default) | **0.8915** | 0.6649 | 0.9238 | **1** | **0.936** | 0.4506 | 0.9742 | **0.7344** |\\n| 1024 | 0.88 | 0.6006 | 0.9287 | **1** | 0.9237 | **0.4674** | 0.9742 | 0.7249 |\"}", "{\"title\": \"Further explanation on W1\", \"comment\": \"We sincerely appreciate your feedback! There might be some misunderstanding and we would like to further explain our motivation.\\n\\nWe agree with you that for existing reconstruction-based methods, the representations of the two classes would be entangled in latent space since these models are biased towards reconstructing normal samples. We also agree that suppose these representations are already learned, further decomposing them cannot help with the OOD detection performance.\\n\\nIn this work, we actually aim to learn less entangled latent representations without seeing anomaly samples. The way we achieve this includes two aspects. 
Firstly, we confine the latent representations of normal observations to a constrained latent space defined by a set of orthogonal and fixed basis vectors, where each representation is decomposed into a weighted combination of the basis vectors (the weight vector is on a probability simplex). Besides, we enforce normal representations to span this constrained subspace by introducing the separation loss (minimizing the similarity between weight vectors of normal samples). Intuitively, the representation of an anomaly sample will be 'squeezed out' from the subspace. That is why our learned encoder can distinguish anomaly samples from normal samples better than the encoders of other methods.\\nTherefore, the representation learning in our method is quite different from existing reconstruction-based methods. We will make this motivation clearer in the camera-ready version.\\n\\nWe now give empirical evidence to illustrate the representation separation between the two classes. \\nLet $\\\\mathbf{w}_n$ and $\\\\mathbf{w}_a$ denote the computed weights of normal and anomalous samples respectively.\\nFig. 16 in Appendix 10 of the revised version indicates that as the training progresses, $\\\\text{Var}(\\\\|\\\\mathbf{w}_n\\\\|_2)$ increases, which leads to a corresponding increase in $\\\\text{E}\\\\left[\\\\|\\\\mathbf{w}_n - \\\\mathbf{w}_a\\\\|_2^2\\\\right]$. This verifies that raising the lower bound effectively enlarges $\\\\text{E}\\\\left[\\\\|\\\\mathbf{w}_n - \\\\mathbf{w}_a\\\\|_2^2\\\\right]$. In addition, Table 10 in Appendix 10 in the revised version illustrates that when the separation loss is applied, the average distance of anomalous weight $\\\\mathbf{w}_a$ from the center of normal weights, i.e., $\\\\mu _ {\\\\mathbf{w}_n} = \\\\text{E}\\\\left[\\\\mathbf{w}_n\\\\right]$, shows greater growth than the average distance of normal weight $\\\\mathbf{w}_n$ from $\\\\mu _ {\\\\mathbf{w}_n}$. 
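The decomposition and separation components described above can be sketched in a few lines. The following NumPy snippet is our own hypothetical illustration, not the authors' implementation: function names, the basis size `k=5`, and the latent dimension `d=16` are assumptions; the weight learner (an MLP with a softmax output in the paper) is replaced here by an explicitly supplied simplex weight vector.

```python
import numpy as np

def make_orthogonal_basis(k, d, seed=0):
    """A fixed, data-free set of k orthonormal basis vectors in R^d (rows of B)."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((d, k)))  # columns of q are orthonormal
    return q.T  # shape (k, d); rows are orthonormal

def decomposition_loss(h, w, B):
    """Squared L2 distance between a representation h and its reconstruction w @ B."""
    return float(np.sum((h - w @ B) ** 2))

def separation_loss(W):
    """Mean pairwise cosine similarity between the weight vectors of normal samples;
    minimizing it pushes the weights apart so they span the constrained subspace."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    sims = Wn @ Wn.T
    n = W.shape[0]
    return float((sims.sum() - n) / (n * (n - 1)))  # average over off-diagonal pairs

B = make_orthogonal_basis(k=5, d=16)
w = np.full(5, 0.2)   # a weight vector on the probability simplex
h = w @ B             # a representation lying exactly in the subspace
print(decomposition_loss(h, w, B))  # 0.0: perfectly expressible by the basis
```

A representation that cannot be written as such a combination keeps a nonzero decomposition loss, which is the intuition behind anomalies being "squeezed out" of the subspace.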
\\nWe further visualize the T-SNE of the learned representations of the encoder w/o and w/ the separation loss over all datasets in Fig. 13 in Appendix 10 of the revised version to verify the representation separation. We can observe that with the separation loss, the distinction between normal and anomalous patterns within the latent space is enhanced. \\nThis supports the sufficiency of the constraint that anomalous representations are enforced to be distinct from normal representations.\\n\\nTherefore, in such a constrained subspace, we could facilitate the capturing of shared information within normal patterns that is distinct from anomalous ones.\"}", "{\"metareview\": \"Based on the reviews, I recommend accepting the paper for its high technical and empirical quality, as well as its significance to the ICLR community. The paper received four reviews, three of which recommend acceptance with high confidence. Notably, one reviewer provided an exceptionally detailed review, offering many constructive suggestions and technical comments. Another reviewer rated the paper a 5 but acknowledged that the authors\\u2019 detailed rebuttal effectively addressed six out of the seven reported weaknesses. As an expert in this topic, I find that the remaining concern is unfounded, with sufficient evidence provided in parts (c) and (g) of Fig. 1 and Figs. 14 and 15 in Appendix 10 to disprove it convincingly.\", \"additional_comments_on_reviewer_discussion\": [\"The main reviewer concerns centered on technical aspects and the key assumptions underlying the proposed method:\", \"**Reviewer hqNX** questioned the lack of important baselines, a loose bound in the separation loss, and a missing discussion on the limitations of the method. 
The authors\\u2019 detailed rebuttal addressed these points, leading the reviewer to increase their rating to 6.\", \"**Reviewer hbkx** raised doubts about the motivation for decomposing normal representations and the absence of discussion on limitations and future work. Despite added experimental results and discussions, the reviewer remained unconvinced, maintaining a rating below the acceptance threshold.\", \"**Reviewer Sgt1** expressed concerns about decomposing normal samples into orthogonal bases, high-dimensional data challenges, computational costs, and reproducibility. The authors addressed these concerns effectively, resulting in a rating increase to 6.\", \"**Reviewer TJDb** initially questioned the theoretical basis and performance of the method. The authors provided additional statistics and results, leading the reviewer to revise their rating from 3 to 6.\", \"In conclusion, the authors\\u2019 comprehensive rebuttals resolved most concerns. Despite the continued reservations of Reviewer hbkx, the overall feedback was positive, with the paper\\u2019s empirical strength and detailed responses being key factors in recommending acceptance.\"]}", "{\"title\": \"Response (Part 1/4)\", \"comment\": \"W1: Lack of KNN Baseline Comparison.\", \"r_to_w1\": \"Thank you for your valuable suggestion! In the revised version of the paper, we have included KNN as a baseline for comparison; please see Fig. 3 and Fig. 4 of the main text and Table 8 of Appendix 5 in the revised manuscript.\\nThe advantage of DRL's design over KNN: Despite the effectiveness of KNN, it computes distances between samples in the observation space, where real-world data often exhibit entanglement between normal and anomalous samples as shown in Fig. 1 in the main text and Fig. 7 of Appendix 1. \\nIgnoring this entanglement in tabular anomaly detection, particularly in the one-class classification setting, usually results in reduced discriminative power between learned normal and anomalous patterns.
Our proposed DRL alleviates this issue by introducing a decomposed representation learning framework.\", \"w2\": \"The authors need to explore a tighter lower bound or justify the sufficiency of the separation loss; additionally, it would be useful to discuss how it could potentially impact performance.\", \"r_to_w2\": \"Thanks for your valuable suggestion!\\n\\n**(1) Exploring a tighter lower bound.** We revise the derivations for this bound, and it is now tighter than the original one, which is illustrated in Proposition 1 in the revised version. Below, we clarify the sufficiency of separating normal and anomalous patterns and the impact on performance when applying this separation.\\n\\n**(2) The sufficiency of separating normal and anomalous patterns.** We have included experimental results in Fig. 16 and Table 10 in Appendix 10 of the revised version to illustrate that increasing $\\\\text{Var}(\\\\|\\\\mathbf{w}_n\\\\|_2)$ in Proposition 1 can amplify the discrepancy between the normal and anomalous patterns. Let $\\\\mathbf{w}_a$ and $\\\\mathbf{w}_n$ denote the computed weights of anomalous and normal samples respectively. Specifically, Fig. 16 indicates that as the training progresses, $\\\\text{Var}(\\\\|\\\\mathbf{w}_n\\\\|_2)$ increases, which leads to a corresponding increase in $\\\\text{E}\\\\left[\\\\|\\\\mathbf{w}_n - \\\\mathbf{w}_a\\\\|_2^2\\\\right]$. This verifies that raising the lower bound effectively enlarges $\\\\text{E}\\\\left[\\\\|\\\\mathbf{w}_n - \\\\mathbf{w}_a\\\\|_2^2\\\\right]$. In addition, Table 10 illustrates that when the separation loss is applied, the average distance of anomalous weight $\\\\mathbf{w}_a$ from the center of normal weights, i.e., $\\\\mu _ {\\\\mathbf{w}_n} = \\\\text{E}\\\\left[\\\\mathbf{w}_n\\\\right]$, shows greater growth than the average distance of normal weight $\\\\mathbf{w}_n$ from $\\\\mu _ {\\\\mathbf{w}_n}$. 
This supports the sufficiency of the proposed separation mechanism.\\n\\n**(3) The impact of this separation on performance.**\\nIt is worth noting that for anomaly detection, the key insight is that a higher anomaly score indicates higher confidence in a sample being anomalous. Therefore, even if the decomposition does not perfectly capture the normal sample distribution, as long as it better represents normal samples than anomalous ones, the model can still achieve accurate anomaly detection.\\nAdditionally, we provide the T-SNE visualization of representation $\\\\mathbf{h}$ by feature extractor $f$ and the reconstructed representation $\\\\tilde{\\\\mathbf{h}}$ by linear combination of basis vectors $\\\\mathbf{w}\\\\mathcal{B}$ over all datasets, as shown in Fig. 12 in Appendix 10 of the revised version.\\nWe can observe a notable overlap between $\\\\mathbf{h}$ and $\\\\tilde{\\\\mathbf{h}}$ of normal samples, and a significant distinction between $\\\\mathbf{h}$ and $\\\\tilde{\\\\mathbf{h}}$ of anomalous samples. This demonstrates that compared to anomalous samples, normal sample representations can be better modeled as a mixture of fixed basis vectors.\\nTo further verify the failure of modeling anomalous representations as a mixture of fixed basis vectors, we provide experimental results in Fig. 17 of Appendix 10 in the revised version to show that the anomaly score (calculated by the decomposition loss in Eq. 3 of the original paper) of anomalous samples is significantly larger than that of normal samples, especially when the separation loss is applied.\\nThis explains why separating the normal and anomalous patterns can result in performance improvement.\"}", "{\"title\": \"Response (Part 1/4)\", \"comment\": \"W1: The argument that normal and anomaly latent representations are entangled in reconstruction-based methods, leading to diminished discriminative power, seems somewhat unfounded. 
Since these models are biased towards reconstructing normal samples, it is expected that representations of the two classes would be entangled in latent space. Based on this, the motivation for learning decomposed normal representations is unclear to me. What are the specific advantages of learning decomposed representations for tabular anomaly detection?\", \"r_to_w1\": \"Thanks for your valuable comment! We are sorry for any confusion.\\nWe would like to address your concern from two perspectives.\\n\\n**(1) Why do the learned representations exhibit entanglement in existing methods?** The main objective of anomaly detection in the one-class classification problem is to model the normal distribution, thus it is expected that the learned normal patterns are distinct from anomalous patterns. As real-world data may exhibit observation entanglement between normal and anomalous samples (please refer to Fig. 7 of Appendix A.1 in the original paper), using the reconstruction loss on the observed samples to distinguish the anomalous samples from the normal samples might be inefficient.\\nIgnoring observation entanglement in tabular anomaly detection under the one-class classification setting can lead to diminished discriminative power between learned normal and anomalous patterns, as the overlap between normal and anomalous representations within the latent space of deep models may obscure the distinction between them, which is illustrated in (c) and (g) of Fig. 1 in the original paper.\\nWe provide additional demonstrations of this phenomenon in Fig. 
14 and 15 in Appendix 10 in the revised version.\\nWe attribute this challenge to the intrinsic heterogeneity of features in tabular data, which aligns with recent findings [1] indicating that neural networks struggle to distinguish regular and irregular patterns, particularly when faced with a large number of uninformative features in tabular data.\\nThis is one of the reasons why distinguishing normal patterns from anomalous ones remains a challenging task for existing reconstruction-based methods.\\n\\n\\n**(2) What are the advantages of learning decomposed representations for tabular anomaly detection?**\\nThe main objective of decomposed representation learning (DRL) is also to model the normal distribution during training.\\nConsidering the entanglement issue discussed above, we aim to capture the normal patterns in a constrained latent space, where the normal and anomalous patterns are enforced to be more discriminative.\\nThe advantage of learning decomposed representations lies in our ability to capture the shared statistical information within normal patterns, which helps distinguish them from anomalies. \\nNormal samples, which are drawn from the same distribution, are considered to represent the normal state. Thus, it is reasonable to assume that these normal samples share common statistical properties that distinguish them from anomalies.\\nInspired by techniques from dictionary learning and topic modeling, we can learn the shared information by enforcing that each normal sample's representation is decomposed into a linear combination of shared basis vectors (analogous to topics in topic modeling [2]) with sample-specific weights (analogous to topic proportions in topic modeling).\\nMeanwhile, the separation constraint enforces the normal and anomalous patterns to be more discriminative, thereby facilitating the capturing of shared information within normal patterns.\\n\\n[1] Why do tree-based models still outperform deep learning on typical tabular data? 
[NeurIPS 2022]\\n\\n[2] A review of topic modeling methods. [Information Systems 2020]\"}", "{\"title\": \"Response (Part 2/4)\", \"comment\": \"W2: The authors may need to explain the rationale for leveraging orthogonal basis vectors to learn such representations. Could vectors with other relational properties achieve the same effect? What are the specific advantages of using orthogonal basis vectors for decomposed representation learning?\", \"r_to_w2\": \"Thanks for your valuable suggestion! We are sorry for any confusion and will explain the advantages in detail.\\n\\n(1) To accurately capture the statistical characteristics of normal samples while distinguishing them from anomalies, it is crucial that the shared basis vectors are sufficiently diverse to encapsulate the global structure of the normal data. \\nTo this end, we eliminate the dependencies among basis vectors by leveraging a set of orthogonal vectors as basis vectors.\\n\\n(2) Given a subspace defined by a set of orthogonal basis vectors, each representation in this space can be expressed as $\\\\mathbf{w}\\\\mathcal{B}$, where $\\\\mathbf{w}$ is the weight vector (denoting the coordinates) and $\\\\mathcal{B}$ represents the fixed set of basis vectors. \\nTo increase the discrepancy between different representations, as discussed in Sec. 3.2.2, we only need to enforce the separation in the weight vectors $\\\\mathbf{w}$ due to the shared basis vectors, i.e., the separation loss (Eq.5 in original paper). It is easier than directly increasing the discrepancy between the representations since we already exclude the shared information for all representations and the dimension of weight vector is extremely low (only 5 for all datasets).\\n\\n(3) Additionally, we have provided a comparison between using random basis vectors and orthogonal basis vectors in Table 1 of the original paper. 
From our experiments, we observe that the orthogonalization of basis vectors is crucial for effectively capturing normal patterns that are distinguishable from anomalies.\", \"w3\": \"What is the performance when using cosine distance for the decomposition loss? Additionally, how does the model perform if only alignment losses are used as anomaly scores during inference?\", \"r_to_w3\": \"Thanks for your insightful suggestion! Following your suggestion, we include experimental results as follows to verify the performance of using cosine distance for the decomposition loss and using the alignment loss as the anomaly score for inference. We have also included these results in Tables 2 and 3 in the main text and Tables 11 to 14 in Appendix 10 in the revised version.\\nWe observe that using cosine distance for the decomposition loss achieves comparable performance to using L2 distance (ours). During inference, DRL only uses the decomposition loss as the anomaly score, as detailed in Section 3.2.3 of the original paper. \\nWhen using the alignment loss as the anomaly score for inference, we observed a degradation in performance. This is because the alignment loss is calculated in the observation space, which is prone to the observation entanglement issue discussed in Section 3.1 of the original paper. As a result, it might be inefficient to use the alignment loss to distinguish the anomalous samples from normal ones. 
\\n\\n\\n| | Backdoor | Fault | Imgseg | Lympho | Pendigits | Vowels | Wbc | Average of 40 data |\\n|----------------------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|--------------------|\\n| Decomposition w/ cosine distance | 0.8808 | 0.6534 | 0.9036 | 0.9868 | 0.9029 | 0.4307 | 0.9417 | 0.7130 |\\n| Inference w/ alignment loss | 0.8787 | 0.6059 | 0.8318 | 0.8278 | 0.7157 | 0.3160 | 0.9111 | 0.6100 |\\n| **DRL (ours)** | **0.8915** | **0.6649** | **0.9238** | **1.0000** | **0.9360** | **0.4506** | **0.9742** | **0.7344** |\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer hqNX,\\n\\nThank you very much again for your time and efforts in reviewing our paper.\\n\\nWe kindly remind that our discussion period is closing soon.\\n\\nWe just wonder whether there is any further concern and hope to have a chance to respond before the discussion phase ends.\\n\\nMany thanks, Authors\"}", "{\"title\": \"Reviewer's Response\", \"comment\": \"I want to thank the authors for taking the time to answer all my questions. I will structure my review as follows. I will first respond to all of your answers. Then, I will summarize my re-review. Lastly, I will give my revised and final score of the manuscript.\\n\\n## Questions\\n**Q1**:\\n\\n- _*T1*_:\\n - (1) That it will fail with anomalous samples is not a guarantee. That is why it is important to verify when this will happen. \\n - (3) Figure 12 does, in fact, exemplify a counterargument I made in my original Review. Particularly at the end of weakness T.2 --- see **breastw** and **ionosphere**, for example. 
My particular worry is that pure distance in the embedding space does not have to translate to actual performance, as the proposed score is a reconstruction error and not a hyperplane in the embedding space.\\n - (4) It does show examples of it in cases where it works, but as it does not contain all of the datasets I cannot judge it. \\n\\n- _*T2*_: I want to thank the authors for their efforts. This is, indeed, what I asked for.\\n\\n- _*T3*_: I still disagree regarding the reconstruction as a score. As I already mentioned, an increase in absolute distance does not have to carry over as an increase in reconstruction error. Additionally, the experiments in Fig. 17 are insufficient to say that, empirically, it works.\\n\\n\\n**Q2**: I do agree that Figures 14 and 15 demonstrate a good separation power. However, it cannot be used as a measure of performance. For instance, the images are T-SNE representations of the embedding space, thus an overlap in the image does not have to mean that it will overlap in the actual embedding of each method. I do agree, however, with the last claim, and believe that the authors successfully proved that in the rebuttal.\\n\\n**Q3**: Thank you for taking care of this. I found Table 2 especially helpful. \\n\\n## Summary\\n\\nMy review raised multiple issues with the manuscript. Specifically, about (1) the theoretical guarantees of separating anomalies from inliers, (2) the increase in performance that this will bring, and (3) that normal samples are better represented with the randomized basis in the embedding space than anomalies. \\n\\n*I believe that the authors successfully proved (1), both theoretically and experimentally. However, (2) was left unproven theoretically and addressed partially empirically. (3) still seems to be more of a hypothesis made about the data rather than an actual fact. 
However, based on the performance of the method in practice, I believe that there are some grounds for this being a general enough occurrence to be used in practice.*\\n\\nAdditionally, the authors gave the code for the experiments after I asked for it. However, this was past the deadline for the appendix (which this counts as), and the authors specifically stated that they did not share it because of missing the deadline. \\n\\n## Final score\\n\\nI will increase my score in soundness significantly, as the authors partially addressed the separation question. While the authors did not provide a theoretical proof as to why the method performed well in general, I believe there is an understanding as to why it does perform well when anomalies cannot be represented with the same basis as the inliers. While it is not enough for a general outlier detection method, due to its overall performance and the good separation power of its representation, it has merit as an embedding technique for other outlier detection methods to utilize. That is why I chose to increase my final score of the manuscript.\"}", "{\"title\": \"Response (Part 2/2)\", \"comment\": \"W2: The reviewer is concerned about the efficiency of the proposed method. Although the authors have shown the runtime in A.5, more comparison with other baselines in terms of runtime will improve the persuasiveness of this aspect.\", \"r_to_w2\": \"Thanks for your valuable suggestion!\\n\\nWe agree that we need to compute a unique weight vector for each sample. However, we optimize a shared weight learner to calculate the weight vectors, rather than directly optimizing a unique weight vector for each sample individually. This significantly reduces the computational cost.\\n\\nBesides, we kindly remind you that the default number of basis vectors (i.e., the dimension of the weight vector) is set to 5 across all datasets. We also provide a sensitivity analysis in Fig. 
9 of Appendix 6 in the original paper, where the results show that setting the number of basis vectors to 5 is sufficient to achieve high performance. Therefore, this number of basis vectors does not significantly increase the model complexity. Additionally, the weight learner is implemented as a simple two-layer fully connected MLP with Leaky ReLU activation, which does not require a large number of parameters.\\n\\nFollowing your suggestion, we now include runtime comparisons with other baselines, which are summarized in Table 8 in Appendix (A.5) in our revision. The results show that our proposed DRL method is computationally efficient. For example, among recent baselines, MCM requires the generation of multiple learnable mask matrices, which increases training costs. NPT-AD, on the other hand, involves a high computational cost in terms of memory and time, due to its reliance on the training set during inference.\", \"w3\": \"Code needs to be released.\", \"r_to_w3\": \"Thanks for your valuable feedback!\\nDue to time constraints at the time of submission, the code was not available. However, we have now released the source code to facilitate reproducibility. 
We hope this addresses your concern.\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer hbkx,\\n\\nThank you very much again for your time and efforts in reviewing our paper.\\n\\nWe kindly remind that our discussion period is closing soon.\\n\\nWe just wonder whether there is any further concern and hope to have a chance to respond before the discussion phase ends.\\n\\nMany thanks, Authors\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer Sgt1,\\n\\nThank you very much again for your time and efforts in reviewing our paper.\\n\\nWe kindly remind that our discussion period is closing soon.\\n\\nWe just wonder whether there is any further concern and hope to have a chance to respond before the discussion phase ends.\\n\\nMany thanks, Authors\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer TJDb,\\n\\nThank you very much again for your time and efforts in reviewing our paper.\\n\\nWe kindly remind that our discussion period is closing soon.\\n\\nWe just wonder whether there is any further concern and hope to have a chance to respond before the discussion phase ends.\\n\\nMany thanks, Authors\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for kindly increasing the score and for taking the time to review our response.\"}", "{\"title\": \"Follow-up feedback\", \"comment\": \"I appreciate the authors' response. Their replies addressed my three concerns about this work, so I have updated my rating accordingly.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for taking the time to review our response and for your positive feedback.\"}", "{\"title\": \"Response (Part 3/3)\", \"comment\": \"Q2: The authors should consider addressing concern E.1, to at least experimentally verify T.2 by extend the experiment results of Figure 1 in original paper.\", \"r_to_q2\": \"Thanks for your valuable suggestion! 
Following your suggestion, we extend the experiment results of Figure 1, and provide the extensive visualizations in Fig.14 and Fig.15 of Appendix 10 in revised version.\\nWe can observe that ignoring observation entanglement in tabular anomaly detection under the one-class classification setting can lead to diminished discriminative power between learned normal and anomalous patterns, as the overlap between normal and anomalous representations within the latent space of deep models may obscure the distinction between them.\\nOur proposed method can amplify the discrepancy between the two (inlier and outlier) patterns.\", \"q3\": \"Unclear description of Section 5.2.\", \"r_to_q3\": \"Thanks for your insightful suggestion! We apologize for any confusion and we have made improvements to the clarity of Section 5.2 in the revised version for better illustration.\\nFollowing your suggestion, we merge the ablation study and the comparison of DRL and variants in the observation space into the same section.\\nDue to the limited space, we summarize the results over 40 datasets in Table 2 in the revised version. We also provide the full results with the reference of data in Table 11 to Table 14 in Appendix 10 of revised version.\\nWe also make the description of variant B (currently named variant E) clear in Section 5.2 as mentioned in E.2.\", \"q4\": \"Code needs to be released.\", \"r_to_q4\": \"Thanks for your valuable feedback!\\nDue to time constraints at the time of submission, the code was not available. However, we have now released the source code to facilitate reproducibility. We hope this addresses your concern.\", \"additional_remarks\": \"\", \"r1\": \"P-values are not a scalar metric.\", \"r_to_r1\": \"Thanks for your valuable suggestion! We revised the Fig.4 in the revised paper.\", \"r2\": \"Considering using a multiple comparison test.\", \"r_to_r2\": \"Thanks for your insightful suggestion! 
We will use a multiple comparison test, for example the Conover-Iman test, in the final revision.\"}", "{\"title\": \"Response (Part 1/2)\", \"comment\": \"W1: Whether the linear decomposition of representations could capture the complex normal distribution.\", \"r_to_w1\": \"Thanks for your valuable feedback! We address your concern from the following perspectives.\\n\\n(1) It is worth noting that for anomaly detection, the key insight is that a higher anomaly score indicates higher confidence in a sample being anomalous. Therefore, even if the decomposition does not perfectly capture the normal sample distribution, as long as it better represents normal samples than anomalous ones, the model can still achieve accurate anomaly detection. Below, we will explain that the decomposition is sufficient to express the normal samples and distinguish anomalous from normal samples.\\n\\n(2) We agree that the true distribution of normal samples in the raw data space can be complex, especially in high-dimensional spaces. However, our approach remaps the raw data into a latent space, where we enforce the decomposition of representations. In other words, we learn more expressive, task-specific features from the raw data and perform the decomposition on these learned representations, rather than directly on the raw observations.\\n\\n(3) To accurately capture the statistical characteristics of normal samples and distinguish them from anomalies, we introduce a set of fixed and shared basis vectors to represent the global structure of the normal data, where the orthogonal basis vectors $\\mathcal{B}$ are introduced to enforce the diversity and dependencies among them. Given basis vectors $\\mathcal{B}$, each representation $\\mathbf{h}$ in this space can be expressed as $\\mathbf{w}\\mathcal{B}$, where $\\mathbf{w}$ is the weight vector (denoting the coordinates) and is specific to each representation. 
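As an illustrative aside, the decomposition just described can be sketched in a few lines of numpy. This is a toy example with assumed dimensions, not the paper's implementation: a random orthonormal basis is built via the Gram-Schmidt process, a representation is expressed as a weighted combination of the basis vectors, and the reconstruction residual plays the role of the decomposition loss / anomaly score.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 16, 5  # latent dimension and number of basis vectors (assumed toy values)

# Data-free, training-free orthonormal basis via Gram-Schmidt on random vectors.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        for b in basis:
            v = v - (v @ b) * b
        basis.append(v / np.linalg.norm(v))
    return np.stack(basis)  # shape (K, D), rows orthonormal

B = gram_schmidt(rng.normal(size=(K, D)))

# Because the rows of B are orthonormal, the least-squares weights for
# "h ~ w B" are simply the projections h B^T.
def decompose(h, B):
    w = h @ B.T   # mixture proportions (coordinates in the subspace)
    return w, w @ B  # weights and reconstruction

# Residual between h and its reconstruction, used here as the anomaly score.
def anomaly_score(h, B):
    _, h_rec = decompose(h, B)
    return np.linalg.norm(h - h_rec)

# A representation lying inside span(B) reconstructs almost exactly, while a
# vector with mass outside the subspace gets a noticeably higher score.
h_normal = rng.normal(size=K) @ B
h_outlier = rng.normal(size=D)
print(anomaly_score(h_normal, B), anomaly_score(h_outlier, B))
```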
We optimize the decomposition loss (Eq. 3 in the original paper) so that all normal samples' representations reside in this subspace, and we show the convergence of the decomposition loss on the training set in Figs. 18 and 19 in Appendix 10 of the revised version.\\n\\n(4) Besides, as shown in Fig. 12 in Appendix 10 of the revised version, we provide the T-SNE visualization of the representation $\\mathbf{h}$ produced by the feature extractor $f$ and the reconstructed representation $\\tilde{\\mathbf{h}}$ obtained as a linear combination of basis vectors, $\\mathbf{w}\\mathcal{B}$, over all datasets. We can observe a notable overlap between $\\mathbf{h}$ and $\\tilde{\\mathbf{h}}$ for normal samples, and a significant distinction between $\\mathbf{h}$ and $\\tilde{\\mathbf{h}}$ for anomalous samples. This demonstrates that, compared to anomalous samples, normal sample representations can be better modeled as a mixture of fixed basis vectors, making it reasonable to distinguish anomalous from normal samples with the decomposition loss (i.e., the anomaly score). \\n\\n(5) To further verify the failure of modeling anomalous representations as a mixture of fixed basis vectors, we provide experimental results in Fig. 17 of Appendix 10 in the revised version. We find that the anomaly score of anomalous samples is significantly larger than that of normal samples, especially when we introduce the separation loss (minimizing the similarity between weight vectors of normal samples).\"}", "{\"title\": \"Response (Part 1/3)\", \"comment\": \"Q1: Authors need to solve T1, T2 and T3.\", \"r_to_q1\": \"Thanks for your insightful comment! 
We would like to address T1, T2, and T3 respectively.\", \"t1\": \"The assumption that the representation of each normal sample in the latent space can be effectively modeled as a mixture of fixed basis vectors with specific mixture proportions needs to be verified.\", \"r_to_t1\": \"Thanks for your valuable comment!\\n\\n(1) It is worth noting that for anomaly detection, the key insight is that a higher anomaly score indicates higher confidence in a sample being anomalous. Therefore, even if the decomposition does not perfectly capture the normal sample distribution, as long as it better represents normal samples than anomalous ones, the model can still achieve accurate anomaly detection. Below, we will explain that the decomposition is sufficient to express the normal samples and distinguish anomalous from normal samples.\\n\\n(2) To accurately capture the statistical characteristics of normal samples and distinguish them from anomalies, we introduce a set of fixed and shared basis vectors to represent the global structure of the normal data, where the orthogonal basis vectors $\\mathcal{B}$ are introduced to enforce the diversity and dependencies among them. Given basis vectors $\\mathcal{B}$, each representation $\\mathbf{h}$ in this space can be expressed as $\\mathbf{w}\\mathcal{B}$, where $\\mathbf{w}$ is the weight vector (denoting the coordinates) and is specific to each representation. We optimize the decomposition loss (Eq. 3 in the original paper) so that all normal samples' representations reside in this subspace, and we show the convergence of the decomposition loss on the training set in Figs. 18 and 19 in Appendix 10 of the revised version.\\n\\n(3) Besides, as shown in Fig. 
12 in Appendix 10 of the revised version, we provide the T-SNE visualization of the representation $\\mathbf{h}$ produced by the feature extractor $f$ and the reconstructed representation $\\tilde{\\mathbf{h}}$ obtained as a linear combination of basis vectors, $\\mathbf{w}\\mathcal{B}$, over all datasets. We can observe a notable overlap between $\\mathbf{h}$ and $\\tilde{\\mathbf{h}}$ for normal samples, and a significant distinction between $\\mathbf{h}$ and $\\tilde{\\mathbf{h}}$ for anomalous samples. This demonstrates that, compared to anomalous samples, normal sample representations can be better modeled as a mixture of fixed basis vectors, making it reasonable to distinguish anomalous from normal samples with the decomposition loss (i.e., the anomaly score). \\n\\n(4) To further verify the failure of modeling anomalous representations as a mixture of fixed basis vectors, we provide experimental results in Fig. 17 of Appendix 10 in the revised version. We find that the anomaly score of anomalous samples is significantly larger than that of normal samples, especially when we introduce the separation loss (minimizing the similarity between weight vectors of normal samples).\"}", "{\"summary\": \"This paper introduces Decomposed Representation Learning (DRL), a new framework for anomaly detection in tabular data. This approach aims to overcome the limitations of traditional reconstruction-based methods, which often struggle with data entanglement, especially in tabular settings where feature heterogeneity can obscure the separation between normal and anomalous samples. DRL remaps data into a latent space, enforcing each normal sample's representation as a linear combination of orthogonal basis vectors that are both data-free and training-free. Furthermore, DRL introduces a constraint to amplify discrepancies between normal and anomalous patterns in the latent space.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. 
This paper is well-motivated, and the technical details are illustrated properly.\\n2. The proposed method is supported by a theoretical analysis of the constraint to maximize the discrepancy between normal and anomalous samples.\\n3. The authors conducted comprehensive experiments to demonstrate the effectiveness of the proposed DRL.\", \"weaknesses\": \"1. DRL relies on decomposing the representation of normal samples into linear combinations of orthogonal basis vectors, and this decomposition assumes that normal samples are well described by fixed orthogonal bases in the potential space. However, the true distribution of normal samples in complex, high-dimensional feature spaces may not be captured simply by a small number of basis vectors. The reviewer is concerned about whether the decomposition may not be sufficient to completely express the normal sample features if the distribution of high-dimensional data is complex or contains nonlinear feature associations, which in turn leads to inaccurate anomaly detection.\\n\\n2. The decomposition and separation constraints of this method require the computation of unique weight vectors for each sample, which may incur significant computational costs on high-dimensional data and large-scale datasets. Also, the introduction of the weight learner increases the model complexity, especially when the number of orthogonal basis vectors is large. The reviewer is concerned about the efficiency of the proposed method. Although authors have shown the runtime in A.5, more comparison with other baselines in terms of runtime will improve the persuasiveness of this aspect.\\n\\n3. The reproducibility of this work is limited. 
The reviewer could not validate this work as the source code was not released.\", \"questions\": \"Please refer to weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Decomposed Representation Learning (DRL), a novel approach for addressing the entanglement issue in representation learning within reconstruction-based Tabular Anomaly Detection (TAD). DRL mitigates this problem by employing orthogonal bases in a training-free, data-free manner to remove latent space dependencies, implemented through the Gram-Schmidt process for Decomposition Loss. It also maximizes variance in the weights of normal latent bases, enhancing discrepancy with abnormal latents as supported by Proposition 1, and reconstructs the original input from latent representations to prevent task-irrelevant features. The paper validates DRL\\u2019s effectiveness with extensive experiments across 40 ADBench datasets and 15 baseline models, achieving state-of-the-art performance through detailed ablation studies and T-SNE visualizations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper effectively addresses limitations in reconstruction-based methods by comparing DRL with recent state-of-the-art models such as NPT-AD and MCM, demonstrating its superior performance. This is a novel contribution to the field of tabular anomaly detection.\\n2. Theoretical support for the separation loss, specifically Proposition 1, provides a solid basis for the model's structure and validates its design choices.\\n3. Extensive experimentation across 40 ADBench datasets and 15 baseline models showcases the robustness and effectiveness of the DRL method, and ablation studies and T-SNE visualizations add valuable insights into its workings.\", \"weaknesses\": \"1. 
Lack of KNN Baseline Comparison: While the paper includes comparisons with several state-of-the-art methods, there is no performance comparison with KNN, a model often effective in tabular data tasks. Including KNN as a baseline would enhance the experimental section by providing insights into how DRL performs relative to a widely recognized tabular anomaly detection approach. I recommend the authors consider adding KNN results and explain how DRL\\u2019s design is particularly beneficial over KNN in anomaly detection.\\n\\n2. Loose Bound in Separation Loss Calculation: In calculating the separation loss lower bound, the authors remove all terms involving squared expectations, which might lead to a relatively loose bound. It would be helpful if the authors could explore alternative derivations for this bound that might be tighter or justify the sufficiency of the current approach. Additionally, it would be useful to discuss how this looseness could potentially impact performance or theoretical guarantees of the DRL method.\\n\\n3. Clarity in Section 2 and Lack of MCM Limitations: Some portions of the paper, particularly Section 2 (Preliminaries), could benefit from clearer explanations. For instance, a more detailed explanation of theta would aid readers in following the theoretical framework more easily. Additionally, while the authors discuss the disadvantages of NPT-AD, the limitations of MCM are not mentioned. Including a brief discussion on MCM's limitations would help balance the comparison and highlight DRL's unique advantages more clearly.\", \"questions\": \"1. What would happen if the basis vectors were set as unit vectors, for example, [1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1] ?\\n2. The paper states that cosine distance improves performance as L2 distance leads to optimization issues. Could the authors explore how performance might change with L1 distance, or clarify why cosine distance was chosen specifically over both L2 and L1?\\n3. 
Proposition 1 proposes that maximizing the variance of weights attached to basis vectors in normal latents enhances separation from abnormal latents. Are there trade-offs in terms of model stability or convergence when applying this approach?\\n4. How sensitive is DRL\\u2019s performance to hyperparameter choices, such as the number of orthogonal basis vectors or weights in loss terms? Are there specific recommendations or guidelines for tuning these parameters across different types of tabular datasets?\\n5. What are the main limitations of DRL as identified by the authors?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present a novel reconstruction-based outlier detection (R-OD) method for tabular data called DRL. DRL's novelty among other R-OD methods is that it focuses on representing the data $\\mathbf{x} \\in \\mathbb{R}^d$ as a linear combination of a randomly selected basis $\\mathcal{B}$ of $\\mathbb{R}^D$, with $d>D$. To this end, the network learns an encoder $f_{\\theta_f}$ from $\\mathbb{R}^d$ to $\\mathbb{R}^D$, a decoder $g_{\\theta_g}$ and a weight learning function $\\phi_{\\theta_\\phi}$ that is in charge of learning such weight representation. The authors also include a novel loss function to train DRL, focusing on 3 different aspects. (i) It learns an embedding function $f$ that agrees with a linear combination representation of the embedding $\\phi$ in the $L_{decomposition}$ loss, (ii) it focuses on separating the normal samples between themselves in the $L_\\text{separation}$ loss and (iii) it reconstructs the embedding by $f$ with $g$ in the $L_{alignment}$ loss. 
Furthermore, the authors include a theoretical result that motivates the use of $L_\\text{separation}$, by proving that an increase in the total $L_2$ distances of the representations of inlier data during training leads to a greater expected distance between an inlier and an outlier (Proposition 1).\\n\\nAdditionally, the authors include an extensive list of real-world experiments in the main text, with 40 real-world datasets and 15 relevant competitors. In particular, the authors tested DRL's one-class classification (OCC) performance, provided an ablation study, and studied different types of distances and base selection strategies. The appendix includes further experiments with different types of synthetic outliers, sensitivity to the parameters, a robustness study, and a computational cost analysis, among others.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper has a nice flow in the presentation.\\n2. The idea of using a linear combination of a projected random basis to represent the data is an interesting idea with big explainability potential.\\n3. The paper is well motivated, as the problem of OCC in tabular data is an important task in ML.\\n4. There is a large list of different experiments.\\n5. The authors use a statistical comparison test to provide a statistical significance analysis of the main experimental results.\\n6. The authors include the pseudo-code of the method in the appendix.\", \"weaknesses\": \"There is not enough evidence to support the claims that the authors make in the paper, both theoretically and experimentally.\", \"particularly\": \"### Theory\\nT.1. The authors claim that DRL \\\" (...) assumes that the representation of each normal sample in the latent space can be effectively modeled as a mixture of fixed basis vectors with specific mixture proportions\\\" (L160-161). 
This assumption is central to the method's idea, and it goes unsupported both in theory (no theoretical example of data behaving as such) and practice (no example proving that, given such a theoretical example, one could extract exactly such a representation). As an example, consider the manifold learning (ManL) literature. Assuming that data comes from a lower dimensional manifold $\\mathcal{M}$ is a big assumption; however, there are examples of such data being generated by synthetic means, and also examples of ManL methods properly learning the representations ---see figure 5 in [Meila & Zhang]. \\n\\nT.2. The authors further claim that Proposition 1 proves that the outliers are going to be far away from the inliers (i.e., separated). While technically true on average, Proposition 1 does not \\\"(...) amplify the discrepancy between the two (inlier and outlier) patterns\\\" (L224). What Proposition 1 shows is that, if one increases the variance of the learned weights $w_i$, the process of measuring the distance between an inlier's representation and an outlier's representation will be, on average, higher. This, however, does not imply that you will necessarily increase the distance of the outliers from the total set of inliers. For example, one could obtain a representation that places a large set of outliers in the centroid of the inliers, with the remaining inliers being sparse around the centroid.\\n\\nT.3. DRL's performance is not explained in the theoretical derivations. The authors focus on proving how the representation that they learn can \\\"separate\\\" outlier from inlier, but it is not clear how an increase in distance can affect the final scoring function (they use $L_\\text{decomposition}$ as a final score). Particularly, it is not clear to me how it can affect the encoder $f$.\\n\\n\\n### Experiments\\nWhile this paper contains a large collection of experiments, they do not focus on verifying the theoretical claims of the method.\\n\\nE.1. 
The authors include experiments to try to prove the claim mentioned in weakness number T.2 in Figure 1. However, out of the total list of datasets (40) and competitors (15), they only use 2 datasets and 1 competitor without any clear reason. This does not contain sufficient evidence that the outliers and the inliers can be separated by Proposition 1. \\n\\nE.2. Weakness T.1 addresses that the random basis reconstruction assumption has no example in theory. In practice, this assumption seems to not be properly explored. The authors include an ablation study in Table 2 where they compare different versions (including one employing only the basis reconstruction assumption) of DRL. However, they only included 7 out of the 40 datasets introduced earlier. \\n\\nAdditionally, the authors include in Figure 5 a comparison between different variants of DRL. Particularly, variant B \\\"applies the decomposition loss to observations, assuming that each normal sample can be decomposed into a set of orthogonal basis vectors (...)\\\" L472-473. This variant ranked third to last among 8 different versions of the method. Furthermore, it is not possible to compare this performance to the other detectors as the authors do not say which methods they use, and only report an average PR-AUC. \\n\\nE.3. At the time of this review, there is no code available for DRL, making it not reproducible.\", \"questions\": \"I kindly ask the authors to address the following questions & concerns in order to improve the manuscript:\\n\\n1. The performance of the method is not properly verified (see T.1,T.2,T.3). It will greatly improve the paper to address, particularly T.1 and T.2. For T.2, I suggest to prove that $\\\\|w_a - \\\\mu_{w_i} \\\\|$ has a lower bound greater than the increase in $\\\\|w_i\\\\|$, where $\\\\mu_{w_i}$ is the centroid of the inlier's representation in basis $\\\\mathcal{B}$.\\n\\n2. The authors should consider addressing concern E.1, to at least experimentally verify T.2. 
Is there any particular reason why not all datasets and competitors were considered (at the very least as an image in the appendix)?\\n\\n3. The authors should consider rewriting section 5.2. It is unclear to me what the difference is between the ablation study presented in the section and the *Comparison of DRL and Variants in the Observation Space*. For instance, why are they separate sections if they seem to try to study the same thing? Why is there no reference to the datasets used in *Comparison of DRL and Variants in the Observation Space*, and why only the average PRAUC and not the full table of results? Why did the authors consider only 7 datasets out of the total 40 in the ablation study?\\n\\n4. The authors should strongly consider releasing the code for the method. Reproducibility is a crucial part of the scientific process, and without the code, this paper cannot be reproduced. Is there any strong reason as to why the code was not available at the time of the review?\\n\\nI will consider changing my rating upon successfully addressing these questions during the rebuttal. \\n\\n\\n\\n### Additional remarks that did not influence the score\", \"a_couple_of_additional_remarks_that_might_improve_the_manuscript\": \"R1. P-values are not a scalar metric. This means that a difference between .20 and .90 is the same as a difference between .0501 and .0502 (when selecting a critical value of 0.05) [Lavine]. Thus, the authors should consider replacing Figure 4 with something more meaningful (see the [Campos et al] results section).\\n\\nR2. The Wilcoxon signed-rank test is a single-population comparison test. This means that it is strictly designed to work in single comparisons. The setting presented in the experiments is a multiple comparison setting, thus a multiple comparison test should be used in this regard (see, for example, the Conover-Iman test [Conover]). \\n\\n\\n### References \\n\\n[Meila & Zhang] Meil\\u0103, M., & Zhang, H. (2023). 
Manifold learning: what, how, and why. https://arxiv.org/abs/2311.03757\\n\\n[Lavine] M. Lavine, \\u201cIntroduction to Statistical Thought\\u201d.\\n\\n[Campos et al] Campos, G.O., Zimek, A., Sander, J. et al. On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study. Data Min Knowl Disc 30, 891\\u2013927 (2016). https://doi.org/10.1007/s10618-015-0444-8\\n\\n[Conover] W.J. Conover (1979), Multiple-Comparisons Procedures.\\n\\n\\n\\n### Update\\n\\nAfter the discussion with the authors, I have decided to increase my score from 3 to 6. See the comments below for more information.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer hbkx,\\n\\nThe discussion period will end soon (Dec 2nd); you raised a further question on motivation, to which we have provided further explanations. We want to check if our response has addressed your questions and concerns. We also noticed that other reviewers have updated their ratings (Reviewer hqNX has increased the score from 5 to 6, Reviewer Sgt1 has increased the score from 5 to 6, and Reviewer TJDb has increased the score from 3 to 6). If necessary, please feel free to provide any additional feedback or ask further questions. Again, thank you for the time spent on reviewing and discussing the manuscript.\"}", "{\"comment\": \"I appreciate that the authors addressed most of my concerns, but my main issue W1 remains unresolved. To me, empirical observations in the latent space fail to adequately explain the advantages of the proposed method, as reconstruction-based methods are expected to produce entangled latent features of normal and anomaly classes. Based on this, I am inclined to keep my previous rating. 
I will further discuss with the peer reviewers for a final recommendation.\"}", "{\"title\": \"Response (Part 4/4)\", \"comment\": \"Q4: How sensitive is DRL's performance to hyperparameter choices, such as the number of orthogonal basis vectors or weights in loss terms? Are there specific recommendations or guidelines for tuning these parameters across different types of tabular datasets?\", \"r_to_q4\": \"Thanks for your insightful comment! We apologize for any confusion. Actually, as mentioned in the last paragraph of the experiments section in the original version, we provided the sensitivity analysis results in Fig. 9 in the Appendix (A.6) due to space limitations, including the sensitivity analysis for the number of basis vectors, the loss-weight hyperparameters, the number of training epochs, and the batch size. Below, we give a more detailed explanation of the sensitivity analysis.\\n\\nFirst, the performance increases rapidly as the number of orthogonal basis vectors increases and then stabilizes. Thus, it is sufficient to set the number of orthogonal basis vectors to 5. For the loss weight $\\lambda_1$ associated with the separation loss, we find that the performance is generally robust across different values, but lower values of $\\lambda_1$ tend to yield relatively better results.\\nTherefore, a lower $\\lambda_1$ is more suitable.\\nSimilarly, for the loss weight $\\lambda_2$ associated with the alignment loss, we observe that lower values of $\\lambda_2$ lead to better AUC-ROC performance. Thus, a lower $\\lambda_2$ is more suitable.\\nRegarding the number of training iterations, we found that the performance stabilizes after 200 iterations, so it is sufficient to set the total training iterations to 200.\\nThe results also demonstrate that the performance remains robust across different batch sizes. 
\nAs noted in Line 372 of the original paper, the hyperparameters were kept consistent across all datasets.\", \"q5\": \"What are the main limitations of DRL as identified by the authors?\", \"r_to_q5\": \"The primary limitation of the current DRL method is that it is designed specifically for tabular anomaly detection. Although our method can be applied to different data types for the anomaly detection task by introducing representation decomposition, we may need to design specific architectures for the weight learner and alignment learner due to the differences between data types. In the future, we plan to explore its application to other data types, where incorporating prior structural knowledge from these data types might be a possible solution. We have added a discussion of the limitations and future work in Appendix 11 in our revision.\"}", "{\"title\": \"Response (Part 2/3)\", \"comment\": \"T2: The authors need to further prove that the distance of the outliers from the total set of inliers could be increased with the separation loss.\", \"r_to_t2\": \"Thanks for your insightful suggestion! Let $\\mathbf{w}_n$ and $\\mathbf{w}_a$ denote the computed weights of normal and anomalous samples, respectively. To empirically prove that $|\\mathbf{w}_a - \\mu _ {\\mathbf{w}_n}|$ has a lower bound greater than the increase in the variance of $|\\mathbf{w}_n|$ when introducing the separation loss, we calculate $\\text{E}\\left[\\|\\mathbf{w}_a - \\mu _ {\\mathbf{w}_n}\\|_2^2\\right]$ and $\\text{E}\\left[\\|\\mathbf{w}_n - \\mu _ {\\mathbf{w}_n}\\|_2^2\\right]$, where we consider two variants, without and with the separation loss. We performed these statistics on all of the datasets and added the results in Table 10 in Appendix 10 of our revised version. Due to the limited space, we provide partial experimental results as follows. 
We find that introducing the separation loss indeed increases the variance of normal samples; however, $\\text{E}\\left[\\|\\mathbf{w}_a - \\mu _ {\\mathbf{w}_n}\\|_2^2\\right]$ shows an even greater increase.\\nAdditionally, we also theoretically prove that the gap $\\text{E}\\left[\\|\\mathbf{w}_a - \\mu _ {\\mathbf{w}_n}\\|_2^2\\right] - \\text{E}\\left[\\|\\mathbf{w}_n - \\mu _ {\\mathbf{w}_n}\\|_2^2\\right]$ can be amplified by increasing the variance of $\\|\\mathbf{w}_n\\|_2$, as illustrated in Proposition 2 in Appendix 10 of the revised version.\\n\\n\\n| | | abalone | amazon | annthyroid | arrhythmia | backdoor | breastw | campaign | cardio | Cardiotocography | census | Average over 40 datasets |\\n|---|---|---|---|---|---|---|---|---|---|---|---|---|\\n| w/o separation | $\\text{E}\\left[\\|\\mathbf{w}_a - \\mu _ {\\mathbf{w}_n}\\|_2^2\\right]$ | 0.2550 | 0.1126 | 0.1717 | 0.5577 | 0.1916 | 0.1738 | 0.0057 | 0.3347 | 0.2055 | 0.0270 | 0.2228 |\\n| w/o separation | $\\text{E}\\left[\\|\\mathbf{w}_n - \\mu _ {\\mathbf{w}_n}\\|_2^2\\right]$ | 0.1411 | 0.1021 | 0.0744 | 0.3935 | 0.0995 | 0.0307 | 0.0041 | 0.1836 | 0.1439 | 0.0274 | 0.1274 |\\n| w/o separation | Gap | 0.1139 | 0.0105 | 0.0973 | 0.1642 | 0.0921 | 0.1431 | 0.0016 | 0.1511 | 0.0616 | -0.0004 | 0.0954 |\\n| w/ separation | $\\text{E}\\left[\\|\\mathbf{w}_a - \\mu _ {\\mathbf{w}_n}\\|_2^2\\right]$ | 0.5647 | 0.3276 | 0.3049 | 0.6160 | 0.2893 | 0.2080 | 0.8078 | 0.3935 | 0.2411 | 0.8362 | 0.3791 |\\n| w/ separation | $\\text{E}\\left[\\|\\mathbf{w}_n - \\mu _ {\\mathbf{w}_n}\\|_2^2\\right]$ | 0.2316 | 0.1928 | 0.1859 | 0.4333 | 0.1678 | 0.0480 | 0.1964 | 0.2077 | 0.1739 | 0.1206 | 0.1771 |\\n| w/ separation | 
Gap | 0.3331 | 0.1348 | 0.119 | 0.1827 | 0.1215 | 0.16 | 0.6114 | 0.1858 | 0.0672 | 0.7156 | 0.2020 |\", \"t3\": \"How the separation loss can affect the encoder $f$ and how an increase in distance can affect the final scoring function.\", \"r_to_t3\": \"Thanks for your valuable comment! We are sorry for any confusion. Based on the responses to T1 and T2, we have proved that the learned representations can \\\"separate\\\" outliers from inliers. We further visualize the T-SNE of the learned representations of the encoder without and with the separation loss over all datasets in Fig. 13 in Appendix 10 of the revised version to verify the representation separation. We observe that, with the separation loss, the discriminative distinction between normal and anomalous patterns within the latent space is enhanced. This constraint facilitates the capturing of shared information within normal patterns by the encoder $f$.\\n\\nFor the concern about how an increase in distance can affect the final scoring function, please refer to parts (3) and (4) of the response to T1.\\nIn addition, the ablation results provided in Table 2 of the original paper also verify that the separation loss is crucial to the model performance.\"}", "{\"title\": \"Response (Part 4/4)\", \"comment\": \"W6: Can the proposed method be applied to other data types (e.g., image or time series data)?\", \"r_to_w6\": \"Thank you for your insightful question!\\n\\nAs classical methods struggle to capture complex patterns and relationships in high-dimensional spaces [1], recent studies have prompted a shift toward deep learning methodologies for tabular data.\\nAlthough fruitful progress has been made in the last several years, capturing comprehensive normal patterns that are distinct from anomalous patterns in tabular data remains a challenging task, as real-world data may exhibit entanglement between normal and anomalous samples.\\nIgnoring observation entanglement in tabular anomaly detection under the one-class 
classification setting can lead to diminished discriminative power between learned normal and anomalous patterns.\\nWe attribute this challenge to the intrinsic heterogeneity of features in tabular data, which aligns with recent findings [2] indicating that neural networks struggle to distinguish regular and irregular patterns, particularly when confronted with numerous uninformative features present in tabular data.\\nOur method re-maps observations into a tailor-designed constrained latent space, where normal and anomalous patterns are more effectively distinguished, thereby alleviating the entanglement problem.\\n\\nIn addition, when considering perceptual data (e.g., image and text), many methods have demonstrated significant success by leveraging the structure of the input data.\\nFor example, images can be rotated, and the ability to distinguish between different rotations varies between anomalies and normal samples.\\nHowever, tabular data lacks such prior structural properties, and our DRL does not rely on prior knowledge of the data structure, making it especially effective for tabular data.\\n\\nWe believe that applying the insights from DRL to other data types is meaningful, and it will be an important direction for our future work.\\n\\n[1] Deep learning for anomaly detection: A review. [ACM Computing Surveys 2021]\\n\\n[2] Why do tree-based models still outperform deep learning on typical tabular data? [NeurIPS 2022]\", \"w7\": \"This is a minor point, but the paper lacks a discussion on limitations and future work.\", \"r_to_w7\": \"Thank you for your valuable suggestion! We agree that discussing limitations and future work is important. Although our method can be applied to different data types for the anomaly detection task by introducing representation decomposition, we may need to design specific architectures for the weight learner and alignment learner due to the differences between data types. 
In the future, we plan to explore its application to other data types, where incorporating prior structural knowledge from these data types might be a possible solution. We have added the discussion on the limitations and future work in Appendix 11 in our revision.\"}", "{\"comment\": \"Dear Reviewer hqNX,\\n\\nThe discussion period will end soon (Dec 2nd), we want to check if our response has addressed your questions and concerns regarding our paper. Please let us know if you have any follow-up comment or question regarding our manuscript. We also noticed that other reviewers have updated their ratings (Reviewer Sgt1 has increased the score from 5 to 6, and Reviewer TJDb has increased the score from 3 to 6). If necessary, please feel free to provide any additional feedback or ask further questions. Again, thank you for the time spent on reviewing and discussing the manuscript.\"}", "{\"title\": \"Response (Part 3/4)\", \"comment\": \"Q2: How performance might change with L1 distance, and clarify why cosine distance was chosen specifically over both L2 and L1.\", \"r_to_q2\": \"Thank you for your thoughtful comment! We have added results using the L1 distance metric for comparison as follows. We also included the results in Table 3 in main text, Table 13 and Table 14 in Appendix 10 of revised version. The results show that L1, L2, and cosine distances all yield promising performance, with cosine distance demonstrating relatively superior and more stable results.\\nIn the original paper, we used cosine distance for both separation and alignment loss. We now provide further clarification on why cosine distance was chosen over both L1 and L2 distances.\\n\\nOur primary objective is to minimize the decomposition loss, which also serves as the anomaly score during inference. 
Separation loss acts as a constraint to enhance the discriminative power between normal and anomalous patterns within the latent space, thereby helping to better capture the distinct normal patterns through decomposition loss. \\nSince separation loss is implemented by enforcing separation among the weights of normal training samples, it has the potential to affect the convergence of the training process. To mitigate this, we use cosine distance for separation loss, as it ensures the values are bounded within a smaller range compared to both L1 and L2 distances, which helps avoid convergence issues during training.\\n\\nAs for alignment loss, considering the potential entanglement between normal and anomalous samples in the observed data space, unlike previous methods minimizing the L2 distance between $\\\\mathbf{x}$ and its reconstructed version $\\\\tilde{\\\\mathbf{x}}$ to accurately maintain the observation information, our strategy focuses on maximizing the cosine similarity between $\\\\mathbf{x}$ and $\\\\tilde{\\\\mathbf{x}}$ to align the information of $\\\\mathbf{h}$ with the intrinsic feature correlation of $\\\\mathbf{x}$, while avoiding excessive retention of observational details that may contain entanglement patterns.\\n\\n| | Backdoor | Fault | Imgseg | Lympho | Pendigits | Vowels | Wbc | Average of 40 data |\\n|----------------------------------------------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|--------------------|\\n| Separation w/ L1 distance | 0.8784 | 0.6391 | 0.9185 | 0.8391 | 0.882 | 0.44 | 0.9589 | 0.7007 |\\n| Alignment w/ L1 distance | 0.8868 | 0.6433 | 0.9004 | 0.9762 | 0.9218 | 0.3696 | 0.9401 | 0.7063 |\\n| Separation w/ L2 distance | 0.8786 | 0.6444 | 0.8998 | 0.8900 | 0.8735 | 0.4635 | 0.9655 | 0.7080 |\\n| Alignment w/ L2 distance | 0.8886 | 0.6576 | 0.9125 | 1.0000 | 0.9090 | 0.4425 | 0.9590 | 0.7134 |\\n| **Separation, Alignment w/ 
Cosine distance (ours)** | **0.8915** | **0.6649** | **0.9238** | **1.0000** | **0.9360** | **0.4506** | **0.9742** | **0.7344** |\", \"q3\": \"Proposition 1 proposes that maximizing variance of weights attached to basis vectors in normal latents enhances separation from abnormal latents. Are there trade-offs in terms of model stability or convergence when applying this approach?\", \"r_to_q3\": \"Thanks for your valuable comment! Regarding loss convergence, the primary objective of DRL is to minimize the decomposition loss (Eq. 3 in the original paper), while the separation loss (Eq. 5 in the original paper) serves as an additional constraint. The separation loss is applied to the normal weights using cosine distance, ensuring that the values remain bounded within a small range. Moreover, the loss weight for the separation loss is set to 0.06 by default, which further constrains its range. Additionally, the weights belong to a probability simplex, which prevents cases where all weights become zero during updates. These mechanisms collectively contribute to stable loss convergence.\\nTo verify this, we have included experimental results in Fig. 18 and 19 in Appendix 10 in the revised version, illustrating the effect of the separation constraint on loss convergence. The results confirm that the separation constraint does not negatively impact convergence.\"}" ] }
CJWMXqAnAy
HyPoGen: Optimization-Biased Hypernetworks for Generalizable Policy Generation
[ "Hanxiang Ren", "Li Sun", "Xulong Wang", "Pei Zhou", "Zewen Wu", "Siyan Dong", "Difan Zou", "Youyi Zheng", "Yanchao Yang" ]
Policy learning through behavior cloning poses significant challenges, particularly when demonstration data is limited. In this work, we present HyPoGen, a novel optimization-biased hypernetwork for policy generation. The proposed hypernetwork learns to synthesize optimal policy parameters solely from task specifications -- without accessing training data -- by modeling policy generation as an approximation of the optimization process executed over a finite number of steps and assuming these specifications serve as a sufficient representation of the demonstration data. By incorporating structural designs that bias the hypernetwork towards optimization, we can improve its generalization capability while only training on source task demonstrations. During the feed-forward prediction pass, the hypernetwork effectively performs an optimization in the latent (compressed) policy space, which is then decoded into policy parameters for action prediction. Experimental results on locomotion and manipulation benchmarks show that HyPoGen significantly outperforms state-of-the-art methods in generating policies for unseen target tasks without any demonstrations, achieving higher success rates and underscoring the potential of optimization-biased hypernetworks in advancing generalizable policy generation. Our code and data are available at: https://github.com/ReNginx/HyPoGen.
[ "hypernetwork", "policy generation", "behavior cloning" ]
Accept (Poster)
https://openreview.net/pdf?id=CJWMXqAnAy
https://openreview.net/forum?id=CJWMXqAnAy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "l9QuorgOok", "kfoTqYlnTy", "jy1ghWsSWJ", "jG4hm65pPK", "UAiusnscVS", "Qc820qhQ3G", "OiHlUF54Iv", "NoMAHBCey7", "KQFWD8BQco", "BGaOKsHDjp", "A92E2B7u6h", "1cQAnNA9HA", "0VFYId3Fyo", "0OaoTukowk" ], "note_type": [ "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1734578499809, 1732208513448, 1730667710742, 1730160470658, 1732194127690, 1732386850653, 1730714998781, 1737523752438, 1732194276916, 1732299060378, 1732194082367, 1730757080228, 1732194090500, 1732194248037 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6231/Area_Chair_XHHa" ], [ "ICLR.cc/2025/Conference/Submission6231/Reviewer_RPpm" ], [ "ICLR.cc/2025/Conference/Submission6231/Reviewer_RPpm" ], [ "ICLR.cc/2025/Conference/Submission6231/Reviewer_2Bw1" ], [ "ICLR.cc/2025/Conference/Submission6231/Authors" ], [ "ICLR.cc/2025/Conference/Submission6231/Reviewer_2Bw1" ], [ "ICLR.cc/2025/Conference/Submission6231/Reviewer_JcCr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6231/Authors" ], [ "ICLR.cc/2025/Conference/Submission6231/Reviewer_JcCr" ], [ "ICLR.cc/2025/Conference/Submission6231/Authors" ], [ "ICLR.cc/2025/Conference/Submission6231/Reviewer_49Ju" ], [ "ICLR.cc/2025/Conference/Submission6231/Authors" ], [ "ICLR.cc/2025/Conference/Submission6231/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents a significant contribution to policy generation through its novel optimization-biased hypernetwork architecture. 
While reviewers raised concerns about result variances, computational costs, and generalization metrics, the authors provided comprehensive responses with new experiments showing success-only statistics, minimal computational overhead, and strong quantitative generalization results. The authors also conducted thorough ablations examining different architectures and optimization steps, demonstrating the effectiveness of their design choices. With reviewers increasing or maintaining positive scores following the rebuttal, and given the strong experimental validation, this work warrants acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised concerns about result variances, computational costs, generalization metrics, and performance on diverse tasks. The authors addressed these with comprehensive responses including new success-only statistics, detailed cost analysis showing minimal overhead, quantitative generalization results, and new experiments on CartPole and Meta-World. The reviewers were satisfied with these responses. Given the thorough validation and clear technical responses, combined with strong performance and practical importance, this work warrants acceptance.\"}", "{\"comment\": \"Thank you, my questions have been addressed.\"}", "{\"summary\": \"In low-data settings, using a hypernetwork can be promising. However, classical hypernetworks have critical issues, such as overfitting, which can limit their generalization capability. To address this, the authors modify the role of the hypernetwork. Instead of directly predicting the policy weights, they iteratively compute the weights in a manner similar to multi-step gradient descent. 
This reformulates the optimization problem for the hypernetwork to predict gradients instead of weights directly, which the authors empirically show leads to better performance and generalization on locomotion and manipulation tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work strengthens hypernetworks by altering the training optimization so that it is guided to output intermediate gradient steps, a novel approach for hypernetworks, especially in policy settings.\\n\\nThe writing and figures are clear, and the experimentation is thorough, with a wide range of baselines that use the same policy network architecture and specification encoder. Performance consistently surpasses that of the baselines.\\n\\nI found the section \\\"Does HyPoGen actually perform optimization?\\\" particularly interesting and insightful, as it showcases how the hypernetwork actually behaves like an optimizer.\", \"weaknesses\": \"Overall, I think this work is strong, and I would be interested in seeing additional analysis on how closely the predicted intermediate gradients align with true gradients that would result from directly optimizing the policy. While the authors have shown that the iterative optimization process effectively reduces the loss over time, it would be valuable to quantify the similarity between these predicted gradients and the actual gradients obtained through standard gradient descent. Such an analysis could fortify the claim that the hypernetwork is indeed guided to predict meaningful gradients, rather than simply memorizing an update path. This comparison would strengthen the case for the hypernetwork's ability to emulate true gradient-based optimization, supporting its effectiveness in generalizing to unseen tasks. 
While this is not a critical weakness, it would serve as a valuable addition that could make the work even more compelling.\", \"questions\": \"I think mentioning learned optimizers (https://arxiv.org/pdf/2209.11208) can be relevant to the overall framing of this work.\\n\\nI am also wondering how well directly training the hypernetwork on the true gradients would work, such that instead of resembling backpropagation, it is exactly backpropagation (assuming no training error).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose to incorporate inductive biases about optimization into a hyper-network framework that generates policies for different tasks. They do so by parameterizing the hypernetwork updates based on the chain rule and also on a delta to the current weights. They show that these inductive biases help HyPoGen outperform previous methods when generalizing across a set of related tasks.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The method is presented and motivated well.\", \"The experiments provide compelling evidence that HyPoGen outperforms previous methods (at least based on the average).\", \"The idea of neural gradients for policy generation is interesting. The methods presented in this paper for incorporating optimization inductive biases into hypernetworks could lead to future interesting work.\"], \"weaknesses\": [\"The variances for Table 1 / 2 (reported in Table 13/14) are quite high (on the order of 100-200). This makes it harder to argue that HyPoGen is better than the previous methods when the gap is often on the order of a few tens of units. It would be fairer to report exact confidence intervals in the main text to make the comparison more clear.\", \"The computational costs of HyPoGen aren\u2019t directly addressed. 
Since it requires multiple steps to keep decreasing the loss, how does the computational cost compare to other methods (especially something like HyperZero)? This makes me think that other baselines could be relevant to assess the tradeoff between computational cost and performance. For example, instead of parametrizing the full chain rule (equation 6/7), can the majority of the benefit be obtained by framing the problem just as a delta from the previous step (equation 5)?\", \"While the paper claims that HyPoGen leads to better generalization, Fig. 4,5,6,7,8,9 show generalization somewhat qualitatively and it is not super clear (other than Fig. 4) that HyPoGen generalizes better than the other methods. Are there more precise metrics that could show this generalization more clearly (i.e. report performance within one std of the training distribution and outside of that for the relevant methods)?\", \"The authors present an interesting method, but I think clearer evaluations and details on the computational requirements would make the paper stronger and help isolate the benefit of the optimization inductive bias.\"], \"questions\": [\"What happens if HyPoGen is applied for many more steps (Fig. D)? Does the loss keep going down, or does it \u201coverfit\u201d eventually?\", \"What is the strategy for choosing the number of hyper-optimization steps? Do harder problems (more out-of-distribution for example) require more optimization steps?\", \"Does HyPoGen work for larger policy models / harder environments (the policy networks at the moment seem to be only 2 layer MLPs)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer **RPpm**:\\n\\nThank you for the thoughtful feedback and for recognizing the novelty, clear presentation, and comprehensive experiments of our work. 
The majority of your comments revolve around the comparison between the neural gradients and the actual gradients, as well as exploring the performance of our model when trained on ground truth gradients. Next, we address these points as follows:\\n\\n**W1:** We calculated the cosine similarity between the neural gradients and the corresponding ground truth gradients. A value closer to 1 indicates that the two gradients are more similar in direction. The similarity after each update is shown in the table below. Empirically, we found the similarities to be around 0.366. This suggests that while the neural gradients are positively correlated in direction with the ground truth gradients, they differ significantly in actual values. This implies that the neural gradients capture the essence of the exact gradients but are far more effective in updating the parameters, given that the neural update process has only 8 steps.\\n\\n| #upd | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |\\n| -------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\\n| cos similarity | 0.3424 | 0.3662 | 0.3939 | 0.3712 | 0.4116 | 0.3566 | 0.3960 | 0.2888 |\\n\\n**W2:** We have incorporated a discussion of learned optimizers, including the work you mentioned (https://arxiv.org/pdf/2209.11208), in the **Related Works** section (**Line 155**) to provide additional context.\", \"regarding_the_question_about_directly_training_the_hypernetwork_on_true_gradients_so_that_it_explicitly_mimics_backpropagation\": \"We conducted experiments in the cheetah environment to explore this idea. Specifically, we used PyTorch's autograd function to compute the ground truth gradients of \u03b8 after each update and supervised the hypernetwork\u2019s output with an L2 loss. The results are summarized in the table below. As shown, the HyPoGen model trained on ground truth gradients performed poorly in the cheetah environment. 
This aligns with our discussion in Sections 4.2 and 5.4, where we mentioned that the ground truth gradients are highly data-dependent and noisy, making them unsuitable for effective training.\\n\\n| | Reward |\\n| ------------------------------- | ------ |\\n| HyPoGen trained on GT gradients | 115.86 |\\n| HyPoGen trained end-to-end | 819.23 |\\n\\nWe appreciate your detailed feedback and believe that these additions enhance the rigor and comprehensiveness of our work. We look forward to your response and further discussions.\"}", "{\"comment\": \"Thank you for the detailed response from the authors. My main points have been addressed, and I raise my score. Nevertheless, I think that results such as the variance should be reported in the main text, as this is standard with many RL works, and makes it easier for the reader.\"}", "{\"summary\": \"This paper proposes a novel hypernetwork architecture for policy generation in behavior cloning, with a focus on generalization to unseen tasks without requiring extra expert demonstrations. Unlike existing methods that apply hypernetworks to generate policy network parameters directly from a task specification, the proposed hypernetwork architecture, HyPoGen, iteratively generates a policy network by simulating gradient-based optimization, where the pseudo \\\"gradients\\\" are conditioned on the task specification. Additionally, the generation of these pseudo \\\"gradients\\\" across different layers of the policy network is structured to follow the chain rule. The primary idea is that the inductive biases introduced by simulating gradient-based optimization can help improve policy generation quality and improve generalization to unseen tasks. 
Experimental results show that HyPoGen outperforms baseline methods in generating policies for unseen tasks without any demonstration and that the policy performance improves during the simulated optimization guided by the trained hypernetwork.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper addresses the challenge of generalizing policy generation to unseen tasks without any demonstration, a realistic and difficult problem in imitation learning.\\n\\n2. The main contribution of the paper is the novel hypernetwork architecture, HyPoGen, which introduces inductive biases to generate a policy network by simulating gradient-based optimization. The architecture performs iterative pseudo \\\"gradient\\\"-descent where the pseudo \\\"gradients\\\" are enforced to follow the chain rule across different layers of the policy network, an interesting and sensible approach.\\n\\n3. It is demonstrated through extensive experiments on locomotion and manipulation benchmarks that HyPoGen outperforms baseline methods in generalizing to unseen tasks without any demonstration. Additionally, several empirical case studies (notably in Table 4) provide evidence that the proposed hypernetwork architecture effectively simulates an iterative optimization process. \\n\\n4. The paper is well organized and clearly written, with a thorough discussion of related work to place it within existing research. The rationale behind the HyPoGen architecture design is clearly explained, and the construction of the hypernetwork's inner components is presented effectively.\", \"weaknesses\": \"1. The training process of the proposed hypernetwork is described somewhat briefly in Section 4.2. Although the set of learnable parameters and the loss function (Eq. 1) are discussed, likely implying end-to-end training over source tasks to minimize the BC loss (Eq. 
1) while optimizing all learnable parameters simultaneously, this part could be elaborated more explicitly. Including pseudocode in the appendix might be helpful if the training procedure is not this straightforward.\\n\\n2. In Section 5.4, there is an analysis of the statistics of the generated policy network parameters with different initial values of $\\\\theta_0$ (Table 3). Are other hypernetwork parameters, apart from $\\\\theta_0$, kept fixed across different initial values of $\\\\theta_0$, or are they retrained separately for each initial value of $\\\\theta_0$? In either case, the presented statistics in Table 3 might not be able to sufficiently answer the question whether HyPoGen remembers a fixed set of parameters for each specification. Specifically, if the other hypernetwork parameters are not retrained, observing the statistics of generated policy network parameters may be less meaningful as the hypernetwork is not adapting to the $\\\\theta_0$ value in use; if the hypernetwork is retrained for different initial values of $\\\\theta_0$, it could still be the case that HyPoGen remembers a fixed set of parameters for each specification for a given $\\\\theta_0$.\", \"questions\": \"1. Is my understanding of the training procedure of the proposed hypernetwork correct, i.e. are all trainable parameters of the hypernetwork, including $\\\\theta_0$, trained simultaneously in an end-to-end fashion?\\n\\n2. Are any parts of the hypernetwork parameters retrained for each initial value of $\\\\theta_0$ when performing the analysis of Table 3? See Weakness 2 above for further context of this question.\\n\\n3. Appendix B.1 provides the range, granularity, and number of samples of tasks specifications in different environments used in the experiments. Is task specification sampling done uniformly? Is the source-test task partition also uniform? 
If so, for the cases where only one task parameter varies, could there be very similar specifications among the source tasks for most test tasks? For example, Table 6 shows that the Cheetah environment has about 40 possible specifications, and there are 40 samples, so nearly every specification appears in either source or target tasks (assuming sampling without replacement).\\n\\n4. It is mentioned that the optimization is performed in a compressed latent space. What is the dimension of this latent space?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"**W3:** Thank you for your feedback on the generalization claims and figures. The plotting format in Figures 5\\u20139 follows the conventions of previous works like HyperZero. However, we understand your concern that these qualitative figures may not clearly demonstrate generalization performance.\\n\\nTo better address this, we report the mean rewards for MuJoCo tasks, as well as the success rate and episode length for ManiSkill tasks, under the suggested two conditions: **\\\"within one standard deviation of the training distribution\\\"** and **\\\"outside one standard deviation of the training distribution\\\"**. These metrics are now presented in Table 26, 27, 28, and 29 of the revised manuscript.\\n\\nThe updated results show that HyPoGen outperforms all relevant methods in both in-distribution and out-of-distribution settings across most tasks. This provides stronger evidence of the proposed method's superior generalization ability compared to existing baselines. 
We hope these additional results clarify our claims and provide a more robust demonstration of HyPoGen\\u2019s generalization capabilities.\\n\\n\\n\\n**Q1:** We conducted experiments with varying numbers of optimization steps K, with the results presented in Table 20 in the supplementary material. On the cheetah task, the optimal value for K is 8. As K increases beyond this point, the final rewards decrease, indicating potential overfitting.\\n\\n\\n\\n**Q2:** The number of hyper-optimization steps (K) is chosen based on experimental results shown in Table 20. We choose K = 8, which provides the best performance, and used this value across all tasks. However, this value can be fine-tuned depending on the specific task or dataset.\\n\\nTo evaluate how the optimal number of steps varies in a more out-of-distribution scenario, we conducted additional experiments using only 10% of the training specifications (compared to 20% in the paper). The results, presented in the table below, show that the best performance was achieved with K = 5, which is smaller than the optimal K for tasks with more training data. This suggests that smaller datasets are more prone to overfitting, necessitating fewer optimization steps to mitigate this issue.\\n\\n| Layer | 1 | 3 | 5 | 8 | 10 | 20 |\\n| ------------ | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |\\n| Reward \\u00b1 Std | 554.70 \\u00b1 31.79 | 527.25 \\u00b1 38.95 | 575.05 \\u00b1 55.16 | 517.71 \\u00b1 22.09 | 511.09 \\u00b1 22.93 | 544.41 \\u00b1 30.35 |\\n\\n\\n\\n**Q3:** Yes, HyPoGen works for larger policy models and harder environments. We experiment with various sizes of policy networks, as detailed in Table 18 of the supplementary material. The best-performing configuration is a 3-layer policy network. As the number of layers and parameters increases, the learning difficulty of the hypernetwork also increases, which results in decreased performance. 
Nevertheless, HyPoGen consistently outperforms HyperZero in terms of rewards.\\n\\nWe appreciate your insightful feedback. We hope the additional variance analysis, computational cost comparisons, clearer generalization metrics, and experiments on optimization steps and larger models help address your concerns. Please feel free to let us know if you have more questions that can help improve the final rating of our work.\"}", "{\"comment\": \"Thank you to the authors for the response. My questions have been overall addressed, and the clarification provided has improved my understanding of the results in Section 5.4.\"}", "{\"comment\": \"Dear Reviewer **49Ju**:\\n\\nThanks for your valuable feedback. \\n\\nWe sincerely appreciate your positive remarks on our paper's presentation and the recognition of significant performance improvement of our method. Since the major concerns are about more extended experiments for more diverse settings and additional variance results, we will address your inquiries and concerns point by point in the following responses.\\n\\n\\n\\n**Q1:** Thank you for the insightful question. Theoretically, our hypernetwork design can accommodate both discrete and continuous action spaces. This flexibility is achieved by using one-hot encoding to convert discrete action predictions into a continuous format, making the underlying network operations similar for both action types.\\n\\nThen the main distinction lies in the training objective, specifically the loss function used. For continuous action spaces, as outlined in Eq. 3, we use an L2 loss function. For discrete action spaces, however, we employ CrossEntropy loss. 
To provide more clarity, we compare the gradients of the two loss functions below:\\n\\n- **Gradient of CrossEntropy Loss (CE):** $\\\\frac{\\u2202L}{\\u2202z} = \\u03c3(z) - y$\\n- **Gradient of L2 Loss:** $\\\\frac{\\u2202L}{\\u2202z} = 2 (z - a)$\\n\\nwhere $\\\\sigma$ represents the softmax function, and $y$ is the one-hot label of action $a$.\\n\\nTo further support our claim, we conducted experiments on the CartPole task within the MuJoCo environment. In this task, the objective is to maintain the balance of a standing pole by choosing between pushing the cart either left or right. We varied the pole length from 0.3 to 1.2 and compared the performance of our model against HyperZero. The experimental results are presented in the following.\\n\\n| method | mean |\\n| --------- | ------------ |\\n| HyperZero | 627.49\\u00b140.82 |\\n| HyPoGen | 695.48\\u00b121.12 |\\n\\nThe results above align with our theoretical analysis, demonstrating that our Hypernet outperforms traditional hypernetworks and effectively handles discrete action spaces without any issue.\\n\\n\\n\\n**Q2**: Thank you for pointing this out. While the tasks may appear slightly modified at first glance, they are more difficult than they seem. As shown in Table 2 of the main paper, all baseline methods struggle with the ManiSkill tasks, even though these tasks differ by only a single parameter. Transferring between them proves to be quite challenging.\\n\\nTo further analyze the limitations of our methods, we conducted additional experiments in the Meta-World environment. 
Specifically, we trained our hypernetworks on three Meta-World tasks related to buttons: \\\"button-press,\\\" \\\"button-press-topdown,\\\" and \\\"button-press-wall.\\\" We then evaluated our method on a novel task, \\\"button-press-topdown-wall.\\\" For all these tasks, we used instruction embeddings from a CLIP-text encoder as the condition for the hypernetworks.\\n\\nThe table below reports the average episode return over 250 episodes.\\n\\n| Method | Episode Return |\\n| --------- | -------------- |\\n| HyperZero | 726.61\\u00b170.15 |\\n| HyPoGen | 1138.71\\u00b1325.54 |\\n\\nFrom these results, we can observe that the policy networks generated by HyPoGen are much better than those of HyperZero, which demonstrates the generalization capability of our approach to more distinct tasks in a complex environment like Meta-World.\\n\\n\\n\\n**Q3**: Thank you for your valuable suggestion. In addition to reporting the average success rate, we now also report the variance of the success rate in **Table 23** of the revised manuscript. We can observe that the variance of our proposed method is comparable with previous methods. This additional information helps to better understand the stability and robustness of our approach. \\n\\n\\n\\nTo recap, we demonstrated that our approach can effectively handle discrete action spaces, showing superior performance in the CartPole task compared to HyperZero. Additionally, we provided further experiments in the Meta-World environment, demonstrating the generalization capability of HyPoGen to more distinct tasks. Lastly, we included the variance of success rates to highlight the stability and robustness of our approach compared to other methods.\\n\\nThank you again for your constructive feedback. We hope these additional experiments and analyses address your concerns and help improve the final rating of our work. 
Please feel free to let us know if you have any further questions.\"}", "{\"summary\": \"The paper introduces HyPoGen, a hypernetwork that generates policies for unseen tasks without demonstrations by learning policy parameters directly from task specifications. By structuring the network to mimic optimization, HyPoGen generalizes effectively despite limited training data. Experiments show it outperforms state-of-the-art methods on locomotion and manipulation tasks, achieving higher success rates in unseen scenarios. The authors will release the code and models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written.\", \"The experimental results show that the performance improvement is significant.\"], \"weaknesses\": [\"In general, I am curious about the limitations of the proposed method. Please refer to the questions section.\"], \"questions\": [\"Could the proposed approach work in a discrete action space?\", \"Are there any potential difficulties or benefits of applying this approach to discrete actions?\", \"In the experiments section, it seems that the generated policy was adapted in scenarios in which the task is slightly modified, such as when the desired speed of the agent is changed. Can the authors provide more experiments in which the policy is generated for more distinct tasks, such as Meta-World?\", \"Though the authors provide the average success rate, it would also be important to show the variance of the success rate.\", \"[1] Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning, Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Avnish Narayan, Hayden Shively, Adithya Bellathur, Karol Hausman, Chelsea Finn, Sergey Levine\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer **JcCr**:\\n\\nThank you for the constructive feedback. 
\\n\\nWe are grateful for your recognition of the novelty of our hypernetwork architecture and the effectiveness of our design choices. Your acknowledgment of our comprehensive experiment results and well-structured presentation is also highly appreciated. Next, we provide the required clarifications regarding the training process and data distribution and more detailed explanations in the following.\\n\\n\\n\\n**W1 & Q1:** We apologize for the confusion regarding the training process. The training process is indeed end-to-end, and we have made this clearer in **Line 339, Sec. 4** of the revised version.\\n\\n\\n\\n**W2 & Q2:** Thank you for raising this valuable question. For your first question, in our experiments, the other hypernetwork parameters were kept fixed and were not retrained when varying the initial value of $\\\\theta_0$. Regarding your follow-up question, we would like to clarify that the learnable initial weight $\\\\theta_0$ is designed to better adapt in conjunction with the trained hypernetworks. 
Even though the hypernetworks can perform gradient descent with a random initial $\\\\theta_0$ without retraining, the BC loss associated with our learnable $\\\\theta_0$ is significantly lower compared to a randomized $\\\\theta_0$.\\n\\nTo illustrate this, we present the average BC loss after each update for randomized $\\\\theta_0$ and learnable $\\\\theta_0$ for the Cheetah task below.\\n\\n| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |\\n| -------------------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n| Learnable $\\\\theta_0$ | 71.4909 | 61.1337 | 46.3077 | 40.6435 | 21.948 | 11.2149 | 3.197 | 1.6058 |\\n| Rand $\\\\theta_0$ | 82.8256 | 77.7552 | 67.3428 | 62.6156 | 45.8644 | 33.4012 | 21.0709 | 14.5684 |\\n\\nThis result demonstrates that our model achieves optimal performance with the trained learnable $\\\\theta_0$ while also exhibiting the ability to properly update random initial parameters.\\n\\n\\n\\n**Q3:** We apologize for any confusion. The task specification does not involve random sampling but instead uses evenly spaced values across the range. For instance, in the Cheetah task, all the specifications used during training and testing are defined as `np.linspace(-10, -0.5, 20) + np.linspace(0.5, 10, 20)`. \\n\\nRegarding the source-task partition, we uniformly sample only 20% of the specifications to use as training tasks, with the remaining 80% serving as testing specifications (as detailed in the \\u201cData Collection and Evaluation Protocols\\u201d section of the main text). Thus, for the Cheetah task, we used only 8 specifications for training, which is quite sparse compared to the 32 test specifications.\\n\\nWhile it is possible for some test tasks to have similar source tasks, this is not the case for the majority of test tasks due to the limited portion of the training sets.\\n\\n\\n\\n**Q4:** The latent dimension is 256 across all experiments.\\n\\nOnce again, thanks for your constructive feedback. 
Please let us know if there are more comments that help improve the quality of our paper.\"}", "{\"comment\": \"Dear Reviewer **2Bw1:**\\n\\nThank you for the detailed feedback. We are pleased to hear that you found the method well-presented and motivated, the experimental results compelling, and the concept of neural gradients for policy generation interesting. We will address your inquiries and concerns on variance analysis, computational trade-offs, and generalization metrics point by point in the following responses.\\n\\n\\n\\n**W1:** Thank you for pointing this out, the high variance in our results is due to the inclusion of failed episodes in the calculations, which is an established practice in previous work such as HyperZero. These failed episodes typically have low rewards in the MuJoCo environments or high episode lengths in the ManiSkill environments (around 200 episodes compared to the usual 10\\u201350 for successful episodes), which significantly increase the overall variance.\\n\\nTo address this, we have updated our results to report the standard deviations based only on success episodes. Please refer to Table 24 and 25 in the updated manuscript.\\n\\nFurthermore, we calculated the probability that HypoGen surpasses all other baselines using formulas from[ this resource](https://stats.stackexchange.com/questions/359768/probability-of-a-difference-between-two-sampling-means-of-two-populations), based on the sample sizes of 50 for MuJoCo and 100 for ManiSkill. 
These probabilities, presented separately for MuJoCo and ManiSkill, clearly show that HypoGen outperforms all baselines in most cases.\\n\\nWe hope this clarification addresses your concern regarding the comparison with previous methods.\\n\\n| | cheetah-speed | cheetah- length | cheetah- speed&length | finger- speed | finger-length | finger-speed&length | walker-speed | walker-length | walker-speed&length |\\n| ----------------- | ------------- | --------------- | --------------------- | ------------- | ------------- | ------------------- | ------------ | ------------- | ------------------- |\\n| P(HyPoGen is top) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |\\n\\n| Method | LiftCube-cube | LiftCube-stiff | LiftCube-damp | LiftCube-length | Pick&Place-cube | Pick&Place-stiff | Pick&Place-damp | Pick&Place-length |\\n| ----------------- | ------------- | -------------- | ------------- | --------------- | --------------- | ---------------- | --------------- | ----------------- |\\n| P(HyPoGen is top) | 0.086 | 1.000 | 1.000 | 0.946 | 0.719 | 0.999 | 0.855 | 0.441 |\\n\\n**W2:** Thank you for raising the important point about computational costs and the trade-off between performance and efficiency. Below, we provide a detailed comparison of computational time between HyPoGen and HyperZero, along with additional experiments to analyze the benefits of parametrizing the full chain rule.\\n\\nThe exact computation costs of the proposed methods and HyperZero are listed below, \\n\\n| | WeightGen Time | Rollout Time x 1000 steps |\\n| --------- | -------------- | ------------------------- |\\n| HyperZero | 0.3ms | 66.95ms |\\n| HyPoGen | 11.62ms | 66.95ms |\\n\\nWhile the proposed HyPoGen is slower in the weight generation process, this process only accounts for a small fraction of the total computation time, as the majority of the time is spent on rollout trajectories, which are identical across methods. 
Furthermore, the policy networks generated by HyPoGen can be reused across multiple trajectory rollouts, making the additional weight generation time non-critical in practice.\\n\\nTo address your suggestion about simplifying the hypernetwork by replacing the full-chain rule modeling (Equation 6/7) with a black-box approach (Equation 5), such as a simple MLP, we conducted experiments on the cheetah task. The results are summarized below:\\n\\n| Method | Rewards | WeightGen Time |\\n| ------------------------------- | -------------- | -------------- |\\n| HyPoGen with MLP Hypernet block | 787.33 \\u00b1 87.43 | 6.99ms |\\n| HyPoGen (x3 updates) | 775.55 \\u00b1 68.26 | 4.97ms |\\n| HyPoGen (x8 updates) | 856.88 \\u00b1 61.73 | 11.62ms |\\n\\nThese results demonstrate that replacing the proposed hypernetwork with a simple MLP improves performance over HyperZero, indicating the utility of treating the update as a delta from the previous step. However, explicitly modeling the full chain rule with the proposed hypernetwork leads to further improvements.\\n\\nIn terms of computational cost, the black-box hypernetwork achieves performance comparable to a three-layer HyPoGen but is slower in weight generation. This observation underscores the efficiency of parametrizing the full chain rule.\\n\\nWe hope this analysis addresses your concerns and provides clarity on the trade-offs involved.\"}
CJEBFNBLhO
Massively Parallel Environments for Large-Scale Combinatorial Optimizations Using Reinforcement Learning
[ "Ming Zhu", "Xiao-Yang Liu" ]
Most combinatorial optimization (CO) problems are NP-hard, and high-quality solutions are difficult to find. Reinforcement learning (RL) is a promising technique due to its powerful search capability; however, sampling speed is a common bottleneck. Current benchmark works only provide instance-wise approaches, while our work covers both instance-wise and distribution-wise approaches, especially for large-scale CO problems. In this paper, we build 24 GPU-based massively parallel environments for 12 CO problems, i.e., each problem has two environments, and use them to train RL-based approaches. We reproduce benchmark RL algorithms, including instance-wise and distribution-wise approaches, especially on large-scale CO problems, on both synthetic datasets and real-world datasets. Take the graph maxcut problem as an example. The sampling speed is improved by at least two orders of magnitude over conventional implementations, and the scale (i.e., number of nodes) of trained problems in a distribution-wise approach is up to thousands of nodes, i.e., improved by one order of magnitude. The objective value obtained by inference (100 $\sim$ 200 seconds) in the distribution-wise scenario is almost the same as that of the state-of-the-art (SOTA) solver Gurobi (running for 1 hour), and better than the SOTA RL-based approach. The code is available at: https://github.com/OpenAfterReview.
[ "Combinatorial Optimizations", "Massively Parallel Environments", "Reinforcement learning", "distribution-wise approach" ]
https://openreview.net/pdf?id=CJEBFNBLhO
https://openreview.net/forum?id=CJEBFNBLhO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xhzGQRO9FU", "mVS9S9Ojo8", "HZbzPzBgPQ", "Cm9cBcQ6Q2", "BxcnTKE2nc", "7wjz2DCLOM" ], "note_type": [ "comment", "official_review", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1732449580263, 1729756519720, 1732204314712, 1730721408734, 1730697966862, 1730612948273 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission381/Authors" ], [ "ICLR.cc/2025/Conference/Submission381/Reviewer_Mqub" ], [ "ICLR.cc/2025/Conference/Submission381/Area_Chair_bsP1" ], [ "ICLR.cc/2025/Conference/Submission381/Reviewer_fge5" ], [ "ICLR.cc/2025/Conference/Submission381/Reviewer_wFdk" ], [ "ICLR.cc/2025/Conference/Submission381/Reviewer_2fGJ" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces a novel approach to massively parallel environments for large-scale combinatorial optimization (CO) problems using reinforcement learning (RL). The authors identify sampling speed as a significant bottleneck in applying RL to CO problems and propose a solution using GPU-based massively parallel environments. They argue that this approach offers several advantages over traditional CPU-based methods, including increased parallelism, reduced communication overhead between CPUs and GPUs, and the ability to train RL-based methods on large-scale CO problems.\\n\\nThe effectiveness of their approach is demonstrated by constructing 24 GPU-based massively parallel environments for 12 CO problems, which are used to train RL-based methods. They reproduce benchmark RL algorithms, including both instance-wise and distribution-wise approaches, and test them on synthetic and real-world datasets. 
The results show significant improvements in both sampling speed and training efficiency, with the capability to train on problems involving thousands of nodes.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper's strength lies in its use of GPU-based massively parallel environments to accelerate the sampling process for large-scale combinatorial optimization (CO) problems using reinforcement learning (RL) algorithms. By leveraging GPUs, the authors achieve a significant speedup in sampling compared to conventional CPU-based methods, enabling the training of RL agents on larger CO problems than was previously possible.\", \"weaknesses\": \"Although the paper addresses 12 combinatorial optimization (CO) problems, it essentially only focuses on the MaxCut problem. While the appendix provides formulations of other CO problems in ILP and QUBO, the comparisons with other solvers are primarily conducted for the MaxCut problem. Therefore, the contribution of this paper to the CO field is limited.\\n\\nThe authors claim that most CO problems can be formulated in QUBO under Pattern II, which allows for a wider range of applications. However, when CO problems with complex and numerous constraints are formulated in QUBO, parameters must be assigned to each constraint and incorporated into the objective function. Setting these parameters appropriately is challenging, often making it difficult to even find a feasible solution, let alone the optimal one. Therefore, except for simple CO problems like MaxCut, which have no constraints, it cannot be said that QUBO formulations are effective for general CO problems.\\n\\nFor MaxCut and QUBO, the following are high-performance solvers that generate good solutions quickly, even for practical-sized problems. 
Thus, while it may be difficult to directly compare their performance, some mention of the following should be included in the paper: \\nDaniel Rehfeldt, Thorsten Koch, and Yuji Shinano, Faster Exact Solution of Sparse MaxCut and QUBO Problem, Mathematical Programming Computation (MPC), Volume 15, pages 445\\u2013470, (2023).\\n\\nThere is little description of the implementation on GPUs and the computational performance of the GPUs. This raises questions about whether the use of 24 expensive NVIDIA A100 GPUs is essential, or if similar performance could be achieved with fewer GPUs through optimizations in GPU implementation. Moreover, when comparing with Gurobi, it is difficult to make a fair evaluation since the computing environments differ significantly (Gurobi does not use GPUs).\\n\\nIn summary, my opinions are as follows:\", \"1\": \"While creating a high-performance solver for the MaxCut problem is meaningful, the performance for other CO problems is unclear, limiting its contribution to the optimization field.\", \"2\": \"The MaxCut problem is a very simple CO without any constraints. Other CO problems, however, are more complex and have numerous constraints, making it difficult to even find a feasible solution when converting them into QUBO form. Therefore, there is a substantial gap between being able to formulate a problem as QUBO and actually solving it as QUBO.\", \"3\": \"If the MaxCut problem is to be addressed, it should be compared not only with Gurobi but also with other methods (exact and approximate solutions).\", \"4\": \"It is necessary to discuss whether 24 GPUs are genuinely required or if similar results could be achieved with fewer computational resources. 
Since Gurobi uses only CPUs, the computational resources differ in this study\\u2019s comparison experiments, and it cannot simply be concluded that the proposed method is superior.\", \"questions\": \"Please provide any counterarguments or additional points regarding the weaknesses mentioned above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"No author response yet\", \"comment\": \"Dear Submission381 Authors,\\n\\nICLR encourages authors and reviewers to engage in asynchronous discussion up to the 26th Nov deadline. It would be good if you could post your responses to the reviews soon.\"}", "{\"summary\": \"This paper proposes a new RL environment for large-scale combinatorial optimization problems, which implements GPU-accelerated parallel sampling, and supports both instance-wise and distribution-wise approaches.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This work builds 24 environments for 12 different CO problems, including many commonly-used benchmarking problems.\\n1. The implemented environment can effectively improve the training speed of RL approaches by accelerating the GPU-based sampling.\", \"weaknesses\": \"1. Discussion of the previous RL environment [Ecole](https://doc.ecole.ai/py/en/stable/index.html), which is a commonly used environment for MILPs, is missing.\\n1. This work mainly focuses on CO problems on graphs, rather than general MILP or QUBO problems, which limits the application scope.\\n1. The two considered patterns mainly involve approaches that predict scores for each node on a graph. However, some other approaches, such as cut selection, are not considered.\\n1. It is not explained how a user can define a new environment for a new CO problem that is not included. I think it is important that a user can easily transfer this environment to a new problem.\\n1. 
The citations are all in the wrong form, \\\"Author (Year)\\\" rather than \\\"(Author, Year)\\\".\\n1. I think this is a valuable work from the perspective of engineering implementation. However, its technical novelty as a research paper is still not very significant. The difficulty of implementing such a parallel system is not demonstrated enough.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper explores the use of Reinforcement Learning (RL) for Combinatorial Optimization (CO) and identifies sampling speed as a major bottleneck in applying RL to CO problems. To address this, the authors propose leveraging GPUs to improve sampling efficiency. The paper contributes to the field by parallelizing both instance-wise and distribution-wise approaches and implementing two RL-based algorithms for CO. The authors create 24 GPU-based \\\"massively parallel\\\" environments to tackle 12 CO problems, providing benchmarks on both real and synthetic datasets. The results show a significant increase in sampling speed, improving solution discovery rates by orders of magnitude. Empirically, the solutions obtained are accurate in terms of objective value and closely match those produced by state-of-the-art (SOTA) commercial solvers.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. The paper identifies sampling speed as a bottleneck in applying reinforcement learning (RL) to combinatorial optimization (CO) and proposes massively parallel environments with two distinct \\\"patterns\\\" for implementing RL-based algorithms.\\n\\nS2. According to the authors, this is the first work to introduce massively parallel RL environments for the distribution-wise approach, which broadens the applicability of the methodology to a wider range of CO problems.\\n\\nS3. 
The paper experimentally demonstrates the flexibility of the Pattern I and II approaches by implementing 11 different algorithms, providing an extensive comparison of their performance across various methods.\", \"weaknesses\": \"W1. The proposal of massively parallel environments for reinforcement learning (RL) is not novel, as similar approaches have been explored in prior work [1, 2, 3, 4]. Additionally, the \\\"patterns\\\" discussed for combinatorial optimization (CO) environments have previously been covered in the literature. This raises questions about the exact technical contributions of the paper, which require a clearer and more detailed explanation. The authors claim to extend prior methods by incorporating a distribution-wise approach, yet there is insufficient experimental evidence to demonstrate the approach's effectiveness across different CO problems.\\n\\nW2. The novelty of the work is unclear. The paper does not adequately explain how these GPU-based massively parallel environments were constructed or how the parallelization of different patterns was achieved. Furthermore, the motivation behind implementing Pattern II, the challenges encountered, and the reasons prior work could not address these challenges are not well highlighted. The results obtained seem to be an artifact of using GPU, but details (if they exist) about the technical advancements made to deploy the algorithms on the GPU have a lack of detail.\\n\\nW3. The paper describes two patterns, I and II, with Pattern II reportedly providing higher solution quality at the cost of slower processing due to more complex sampling methods (What are they, I have also asked that in the question section). This trade-off may limit the speed advantages of the approach when high-quality solutions are prioritized. 
It would be beneficial to present objective values versus time for both patterns, preferably within a single plot for direct comparison.\", \"citations\": \"[1] Nair, A., Srinivasan, P., Blackwell, S., Alcicek, C., Fearon, R., De Maria, A., ... & Silver, D. (2015). Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296.\\n\\n[2] Clemente, A. V., Castej\\u00f3n, H. N., & Chandra, A. (2017). Efficient parallel methods for deep reinforcement learning. arXiv preprint arXiv:1705.04862.\\n\\n[3] Khalil, E., Dai, H., Zhang, Y., Dilkina, B., & Song, L. (2017). Learning combinatorial optimization algorithms over graphs. Advances in neural information processing systems, 30.\\n\\n[4] Berto, F., Hua, C., Park, J., Luttmann, L., Ma, Y., Bu, F., ... & Park, J. (2023). Rl4co: an extensive reinforcement learning for combinatorial optimization benchmark. arXiv preprint arXiv:2306.17100.\", \"questions\": \"Major Questions:\\n\\nQ1. \\u201cMoreover, we use several tricks to improve the quality of solutions, e.g., sampling algorithms.\\u201d.\\nWhat tricks are employed to improve the quality of solutions?\\n\\nQ2. \\u201cMoreover, from experiments, we see that the methods in Pattern II are generally better than that in Pattern I.\\u201d Do you anticipate it to work on almost all the CO problems? Do you have any theoretical insights/results to back this claim?\\n\\nQ3. Where can I find results for the other 12 CO problems? I only see results for max-cut with different datasets.\\n\\nQ4. What algorithm does Gurobi specifically use to solve CO problems used in your benchmarks?\\n\\nQ5. Is \\u201cdREINFORCE\\u201d the method that was contributed in this case? I saw it was linked with external citations, so I am not sure if it was borrowed as a massively parallel environment framework to implement Pattern II. Please clarify.\\n\\nQ6. Can you summarize the novel discoveries and insights that this paper presents?\", \"minor_issues\": \"1. 
The paper does not have line numbers in the left margin.\\n\\n2. The header contains \\u201cUnder Review as a conference paper at ICLR 2024\\u201d, which should be \\u201c...ICLR 2025\\u201d.\\n\\n3. I suggest using \\\\citet for in-text citations (using the citation in a sentence) and \\\\citep for parenthetical citations, i.e., where you just want the citation as a reference.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper builds 24 GPU-based massively parallel environments for 12 CO problems. Specifically, the authors reproduce benchmark RL algorithms, including instance-wise and distribution-wise approaches especially in large-scale CO problems, on both synthetic and real-world datasets. Experiments demonstrate that the benchmark significantly improves the sampling speed, the scale of trained problems, and the objective value.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tDeveloping a comprehensive combinatorial optimization benchmark for RL-based approaches is important for the community. Specifically, this paper first builds 24 GPU-based massively parallel environments for 12 CO problems, and then reproduces benchmark RL algorithms, including instance-wise and distribution-wise approaches, on both synthetic datasets and real-world datasets.\\n2.\\tExperiments demonstrate that the benchmark significantly improves the sampling speed, the scale of trained problems, and the objective value. Specifically, the proposed benchmark improves the sampling speed by two orders of magnitude, and the objective value obtained in certain cases is on par with the state-of-the-art solver Gurobi, which is widely regarded as the gold-standard solver.\", \"weaknesses\": \"1.\\tThe technical contribution of the benchmark seems limited. 
Specifically, this paper builds 24 GPU-based massively parallel environments for 12 CO problems, and reproduces and benchmarks existing RL algorithms on both synthetic datasets and real-world datasets. It would be more valuable if the authors could propose some new RL algorithms based on their extensive benchmark results.\\n2.\\tAlthough the benchmark enhances the scale of trained problems by an order of magnitude to thousands of nodes, it still remains relatively small compared to real-world industrial problems with tens of thousands of nodes.\\n3.\\tThe authors claim that they evaluate the RL algorithms on real-world datasets. However, I do not find a description of the real-world datasets in Section 5. It would be more convincing if the authors could clarify what real-world datasets they use.\", \"questions\": \"Please refer to weaknesses for details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
CIs9x2ZRgh
CR-CTC: Consistency regularization on CTC for improved speech recognition
[ "Zengwei Yao", "Wei Kang", "Xiaoyu Yang", "Fangjun Kuang", "Liyong Guo", "Han Zhu", "Zengrui Jin", "Zhaoqing Li", "Long Lin", "Daniel Povey" ]
Connectionist Temporal Classification (CTC) is a widely used method for automatic speech recognition (ASR), renowned for its simplicity and computational efficiency. However, it often falls short in recognition performance. In this work, we propose the Consistency-Regularized CTC (CR-CTC), which enforces consistency between two CTC distributions obtained from different augmented views of the input speech mel-spectrogram. We provide in-depth insights into its essential behaviors from three perspectives: 1) it conducts self-distillation between random pairs of sub-models that process different augmented views; 2) it learns contextual representation through masked prediction for positions within time-masked regions, especially when we increase the amount of time masking; 3) it suppresses the extremely peaky CTC distributions, thereby reducing overfitting and improving the generalization ability. Extensive experiments on LibriSpeech, Aishell-1, and GigaSpeech datasets demonstrate the effectiveness of our CR-CTC. It significantly improves the CTC performance, achieving state-of-the-art results comparable to those attained by transducer or systems combining CTC and attention-based encoder-decoder (CTC/AED). We release our code at \url{https://github.com/k2-fsa/icefall}.
[ "Consistency regularization", "CTC", "speech recognition" ]
Accept (Poster)
https://openreview.net/pdf?id=CIs9x2ZRgh
https://openreview.net/forum?id=CIs9x2ZRgh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yvp0nQaype", "svnD0Bn96A", "s2Z8l134xk", "o7WS5kX8I1", "np9MOOilcX", "nSWK49sGGT", "mWNjKAYuiO", "lyO8ed0Qwn", "l54dBLPI5i", "kKpraRCWmH", "jbhImVfiG9", "iWzJP3GVvH", "h6hEIY0wem", "f47RDzbNtH", "dRD5GZir0p", "abBQEebL3A", "X06ILVXrQ8", "WJKr2FAZEn", "R6UQ05uf0G", "QEls1bDN3A", "JRUw9idUer", "DMCPIEEMcO", "CqQAgod9AX", "C1UNIJoTxv", "Aun6WGA1Hb", "1ldmLWWKNn" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732500954753, 1733138804361, 1732488771606, 1732463499627, 1732419701192, 1732503588569, 1730661675236, 1732206268560, 1732423017097, 1732254384204, 1732462423234, 1730604593857, 1734635435359, 1733302681585, 1730418844991, 1732599066162, 1732252825532, 1732609217663, 1732600186397, 1737524032129, 1732205578183, 1732421663655, 1730392120531, 1732423408358, 1732254947286, 1732207548986 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Reviewer_LMc2" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Reviewer_LMc2" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10196/Reviewer_6WyR" ], [ "ICLR.cc/2025/Conference/Submission10196/Area_Chair_bFCD" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Reviewer_HiGE" ], [ "ICLR.cc/2025/Conference/Submission10196/Reviewer_LPhF" ], [ "ICLR.cc/2025/Conference/Submission10196/Reviewer_6WyR" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Reviewer_LPhF" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ], [ "ICLR.cc/2025/Conference/Submission10196/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks!\", \"comment\": \"Thank you very much for your valuable feedback and for improving the score!\"}", "{\"title\": \"Additional experiment on WenetSpeech dataset to validate the effectiveness and generalization ability of CR-CTC\", \"comment\": \"To validate the effectiveness and generalization ability of CR-CTC with a large amount of training data, we additionally train the Zipformer-L model with CTC and CR-CTC, separately, on a 10k-hour Mandarin dataset, WenetSpeech (https://github.com/wenet-e2e/WenetSpeech). 
Experimental results (WER %) on test sets (TEST_NET/TEST_MEETING) demonstrate that CR-CTC can still significantly improve the CTC performance.\n\n* CTC, train for 18 epochs, greedy-search-decoding: 7.73/10.81 ; prefix-search-decoding: 7.73/10.83\n* CR-CTC, train for 9 epochs, greedy-search-decoding: **6.68/8.74** ; prefix-search-decoding: **6.63/8.63**\n\n**Notably, the relative improvements on TEST_MEETING (which is out-of-domain) are around 20%, demonstrating the generalization ability of CR-CTC.** We hope this addresses your concerns and that you will reconsider our work. Thank you very much!\"}", "{\"title\": \"Thanks for the detailed reply!\", \"comment\": \"Really appreciate the authors updating the related work, I've updated my score.\"}", "{\"title\": \"Thanks and response to concerns (Part 2)\", \"comment\": \"> For a deeper understanding, the authors could include results showing the impact of increased time-masking on the baseline CTC model as well. This would help isolate whether the benefit comes from the two-branch architecture of CR-CTC or simply from more aggressive augmentation. Although the authors reported one of the baselines with larger time-masking, it would be helpful if results were provided for the other tables as well.\n\nIn the original manuscript, Table 5, we provided the result of using a larger amount of time masking (ratio = 2.5) for the CTC baseline model, which led to a worse result. In addition, we reported the result of using larger frequency masking for CR-CTC, which led to a WER degradation of 0.07% on test-clean. 
This indicates that the performance gain from increasing the amount of time masking is primarily due to the masked prediction behavior, rather than merely increasing the input diversity for the two branches, or the more aggressive augmentation.\n\nWe conducted this ablation study on LibriSpeech, consistent with our other ablation studies.\n\nIn response to the comment, we conducted an additional experiment using a larger amount of time masking for the CTC baseline on the Aishell-1 dataset, and the results also showed a performance degradation:\n- CTC: 4.47, 4.8\n- CTC, larger time masking: 4.49, 4.86\n\n> In Table 3, the best results were obtained using Zipformer XL. However, the authors should: 1. Explain the rationale for using Zipformer-M in Tables 4 and 5 instead of Zipformer-XL. 2. Provide results for Zipformer-XL in Table 11 for completeness. 3. Clarify whether the results in Table 11 are from self-distillation or masked prediction in CR-CTC.\n\nIn this work, our ablation studies (e.g., Tables 4, 5, 6, and 7) were all conducted using the Zipformer-M encoder on the 1k-hour LibriSpeech dataset. The model size is moderate and the dataset is widely used. This is also consistent with the Zipformer paper [1]. We believe our choice is appropriate and not computation-costly for ablation studies. \n\nIn response to the comment, we have conducted experiments to train Zipformer-XL with CTC and CR-CTC, respectively. The results confirm that CR-CTC significantly outperforms the CTC baseline:\n- CTC, 2.35/5.45\n- CR-CTC, 2.02/4.34\n\nHowever, the results reported in the literature for the LibriSpeech dataset, such as those from Zipformer [1], Branchformer [2], E-Branchformer [3], and Conformer [4], typically use models with a maximum size of no more than 200M parameters. For example, in the Zipformer paper, the largest model reported for the LibriSpeech dataset is Zipformer-L. 
We believe this limitation is due to the risk of overfitting when training larger models on the 1k-hour LibriSpeech dataset. In our experience, we tried training a Zipformer-XL with the pruned transducer loss, but it resulted in overfitting and performed worse than Zipformer-L. Therefore, we did not include Zipformer-XL in the LibriSpeech table. This choice was also made to ensure a fair comparison in terms of model size with other state-of-the-art models. \n\nTables 4, 5, and 6 present the ablation study results for self-distillation, masked prediction, and peak suppression, which explain the three perspectives for CR-CTC. We have clarified this in the table captions. All other tables present the results with our final CR-CTC model, unless otherwise specified. \n\n[1] Yao, Zengwei, et al. \"Zipformer: A faster and better encoder for automatic speech recognition.\" The Twelfth International Conference on Learning Representations. 2024.\n\n[2] Peng, Yifan, et al. \"Branchformer: Parallel mlp-attention architectures to capture local and global context for speech recognition and understanding.\" International Conference on Machine Learning. PMLR, 2022.\n\n[3] Kim, Kwangyoun, et al. \"E-branchformer: Branchformer with enhanced merging for speech recognition.\" 2022 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2023.\n\n[4] Gulati, Anmol, et al. \"Conformer: Convolution-augmented transformer for speech recognition.\" arXiv preprint arXiv:2005.08100 (2020).\n\n> The results of SR-CTC in Table 6 are slightly worse than those of CR-CTC. Do the authors have any explanation for this behavior, and does SR-CTC also use increased time-masking?\n\nIn this work, we provide three explanation perspectives for the essential behaviors in CR-CTC: self-distillation, masked prediction, and peak suppression. 
Inspired by the point of peak suppression, we additionally propose the simple method, SR-CTC, specifically designed to learn smoother CTC distributions (Appendix Section A.1), which is experimentally validated to be effective (Table 6). That it is worse than our final CR-CTC is expected, since it does not benefit from the other two key behaviors: self-distillation and masked prediction. Unlike CR-CTC, SR-CTC does not leverage the token distributions from another branch, thereby lacking the masked prediction behavior. So we didn't use larger time masking for SR-CTC.\"}", "{\"title\": \"Thanks and response to concerns (Part 1)\", \"comment\": \"We sincerely thank the reviewer for the detailed review and valuable comments, which have helped improve the clarity and quality of our work. Below, we provide detailed responses to each of the reviewer's concerns.\n\n> Some smaller details are a bit unclear.\n\nWe have updated the manuscript to improve its clarity. Please see the responses below. \n\n> Abstract starts a bit strange. It says CTC is worse than RNN-T and AED. Yes sure, we know. But then it talks about some method to maybe improve CTC. So why is mentioning RNN-T/AED relevant? Is it because you think the gap between CTC and AED/RNN-T is larger than what you would expect, and some methods like the presented here should close the gap? But I don't think that this really is being shown here in this work. Also, some variant of this method could maybe be applied to AED/RNN-T just as well. So, I don't really see why mentioning AED/RNN-T in the abstract is really relevant for this work here. It's fine in the introduction, to put CTC into perspective, but I don't think it's relevant in the abstract. I was a bit confused about this.\n\nThanks for pointing this out. Our main goal is indeed to improve CTC performance, and the results demonstrate that our proposed CR-CTC achieves state-of-the-art results comparable to those attained by transducer and CTC/AED. 
In the revised manuscript, we have refined the abstract to make the description clearer.\n\n> Eq 3 and also Figure 1, the CTC loss is maybe better formulated on z, not on x? I found it weird that x goes into L_CTC but z goes into L_CR.\n\nThanks! We have replaced x with z in L_CTC, in the revised manuscript. \n\n> \"a time warping factor of 80\" - what does that mean? I don't think you make the sequence 80 times longer?\n\nSorry for making that confusing. As described in Section 4.1, we use Lhotse (https://github.com/lhotse-speech/lhotse) for data preparation. \"time_warp_factor\" is a parameter of the \"time_warp\" function (https://github.com/lhotse-speech/lhotse/blob/master/lhotse/dataset/signal_transforms.py#L338). Specifically, it specifies the maximum range (in frames) around a randomly selected center point on the time axis where the warping can occur. The \"warped\" index is chosen randomly within the range [center - factor, center + factor]. Then it interpolates the first \"center\" frames to \"warped\" frames (denoted as A), and interpolates the remaining \"T - center\" frames to \"T - warped\" frames (denoted as B), where T is the input length. The obtained two parts A and B are concatenated as the result. We have added a footnote with the link to make it clearer in the revised manuscript.\n\n> Please clarify the downsampling of the Zipformer. Do you stick to the original Zipformer here, where the Conv Frontend downsamples the 100Hz feature frames to 50Hz, and then the residual/bypass connection is always at 50Hz, and at the very end, you downsample again to get 25Hz output frames, i.e. the log probs are at 25 Hz?\n\nYes. We use the original downsampling rates of Zipformer. It takes input features at a frame rate of 100Hz, processes the sequence through 6 stacks with frame rates of 50Hz, 25Hz, 12.5Hz, 6.25Hz, 12.5Hz, and 25Hz, and finally produces the encoder output at a frame rate of 25Hz. 
\\n\\nThanks for your suggestion. We have added this information in the revised manuscript. \\n\\n> \\\"auxiliary head of AED\\\" (p9, l483) (and also same with transducer) / Table 7: I don't exactly understand what you report here. Is the AED (or transducer) head just used as an aux loss, and during recognition, you only use the CTC head and ignore the AED (or transducer)? Please be more clear about that. Also, you are giving the wrong citation for that. The reference you give is about joint AED/CTC, where both AED and CTC heads are used for recognition, so nothing is ignored, nothing is just used as aux loss. The only reference I know where AED is used as an aux loss for CTC is \\\"Keep Decoding Parallel With Effective Knowledge Distillation From Language Models To End-To-End Speech Recognisers\\\", Hentschel et al, 2024.\\n\\nThanks for the suggestion. Yes, in Table 7, the AED and transducer heads are discarded after training, with only the CTC head retained for inference. In the revised manuscript, we have clarified this point for improved clarity and updated the reference with the one you suggested.\\n\\n> Table 3 caption: \\\"GigaSpeeech\\\" typo.\\n\\nThanks. We have corrected it in the revised manuscript. \\n\\n> Transducer w/ CR-CTC, what exactly is that? The same approach applied on transducer? But then this is not CTC? Or is it combined CTC with transducer?\\n\\nAs described in the first paragraph of Section 4.2, \\\"Pruned transducer w/ CR-CTC\\\" refers to using CR-CTC as an auxiliary loss to improve the pruned transducer model. The system consists of a CTC head and a transducer head, with consistency regularization only applied on the CTC head (i.e., CR-CTC). 
After training, the CTC head is discarded, retaining only the transducer head for inference.\"}", "{\"title\": \"Thanks for feedback\", \"comment\": [\"Thank you very much for your feedback!\", \"First, we would like to summarize the contributions of this work:\", \"We propose CR-CTC, which enforces consistency between two CTC distributions obtained from different augmented views of the input mel-spectrogram. We also provide in-depth insights into its essential behaviors from three perspectives: self-distillation, masked prediction, and peak suppression.\", \"Experiments on LibriSpeech, Aishell-1, and GigaSpeech datasets demonstrate that CR-CTC significantly improves the CTC performance, achieving state-of-the-art results comparable to those attained by transducer or CTC/AED.\", \"In response to the reviews, we have also made **several updates** to the manuscript:\", \"We have updated the paragraphs discussing related works on consistency regularization in Section 2 to more explicitly clarify the distinctions of our work.\", \"We have supplemented an additional self-distillation experiment, EMA-distilled CTC, in Section 4.3. Experimental results in Table 4 show that CR-CTC significantly outperforms EMA-distilled CTC.\", \"We have conducted additional experiments on the LibriSpeech dataset **using a Conformer encoder**, to validate the effectiveness and generalization ability of CR-CTC. **Results in Appendix A.7 demonstrate that it is also effective with Conformer, significantly surpassing standard CTC and achieving slightly better results compared to CTC/AED and transducer.**\", \"CTC, 77.4M, 2.92/7.15\", \"CTC/AED, 103.1M, 2.5/5.94\", \"Pruned transducer, 78.6M, 2.49/5.87\", \"CR-CTC, 77.4M, 2.43/5.78\", \"We have also updated some details to improve clarity.\", \"Do you have any further questions? We would be happy to address any concerns you may have. 
If there are no other issues, would you consider updating the score?\"]}", "{\"summary\": \"This paper applies a new type of self-consistency loss on different augmented views for a CTC-based ASR model. The new consistency-regularized loss is a KL divergence over the CTC output. Experimental results show that the WER improved.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed idea is intuitive. Although the gain is small, the paper is compared with a couple of competitive baselines on LibriSpeech.\", \"weaknesses\": \"The paper lacks citations to some very relevant work:\", \"most_important_one\": \"Contrastive siamese network for semi-supervised speech recognition (https://arxiv.org/pdf/2205.14054). The paper focuses on comparing with SoTA, instead of comparing with self-distillation or other consistency-based baselines from the literature. In that paper, it includes practical tricks to make a SimSiam-type model work for ASR.\", \"questions\": \"Please properly cite literature work and make proper comparisons.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks and response to concerns (part 2)\", \"comment\": [\"> Please properly cite literature work and make proper comparisons.\", \"Thank you for pointing out the missing references. We have made the following updates in the first revised version:\", \"In the revised version of the manuscript, we have **updated the paragraphs discussing related works on consistency regularization in Section 2 to more explicitly clarify the distinctions of our work.** Additionally, while we have already cited many relevant references in the original version, we have now **added the following previously omitted ones:**\", \"Khorram, Soheil, et al. 
\"Contrastive siamese network for semi-supervised speech recognition.\" ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022. (mentioned by Reviewer LMc2)\", \"Sapru, Ashtosh. \"Using data augmentation and consistency regularization to improve semi-supervised speech recognition.\" (2022). (mentioned by Area Chair bFCD)\", \"He, Kaiming, et al. \"Momentum contrast for unsupervised visual representation learning.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.\", \"Jiang, Dongwei, et al. \"Speech simclr: Combining contrastive and reconstruction objective for self-supervised speech representation learning.\" arXiv preprint arXiv:2010.13991 (2020).\", \"Weninger, Felix, et al. \"Semi-supervised learning with data augmentation for end-to-end ASR.\" arXiv preprint arXiv:2007.13876 (2020).\", \"In response to the comment, we have **supplemented an additional self-distillation experiment in Section 4.3.** Specifically, we construct a teacher model by tracking the model weights using an exponential moving average (EMA), and incorporate an auxiliary loss to learn from the CTC distribution of the teacher model. We refer to this method as **EMA-distilled CTC**, with details provided in Appendix Section A.6. **Experimental results in Table 4 show that CR-CTC significantly outperforms EMA-distilled CTC (2.12/4.62 vs. 2.31/5.25).**\"]}", "{\"title\": \"Thanks and response to concerns (Part 3)\", \"comment\": \"> Unclear how well this method works in other cases, e.g. other models, other datasets, some other hyperparams different. Specifically, I tested it in my setup, and it didn't really help.\n\n> Note, as your method is very simple to implement, and your improvements here are really impressive, I was just trying it out myself. 
However, with a negative result: For my Conformer CTC baseline, on 100Hz inputs, downsampled by factor 6, with BPE 10k vocab, with aux AED loss (\"Keep Decoding Parallel With Effective Knowledge Distillation From Language Models To End-To-End Speech Recognisers\", Hentschel et al, 2024), where my baseline with greedy decoding without LM was at 5.93% on dev-other, it degraded with CR-CTC to 5.99% on dev-other. I halved the number of epochs and halved the batch size for the CR experiment, just like you did. This is with CR loss scale 0.2. I did not adapt SpecAugment yet, but from your paper, I would expect that even with this setting, I should already see quite some improvement. So, why don't I? Your paper is lacking such study on other settings, as mentioned above (Conformer, other BPE sizes, other downsampling) to know whether I can/should expect similar improvements there or not, and whether I maybe need a very different CR loss scale there, or whether I need to care about other things.\n\n**In response to the comments, as mentioned above, we have conducted additional experiments to evaluate the use of a Conformer encoder, a larger BPE vocabulary size of 10k, and alternative encoder downsampling rates of 2 and 8. The results from these experiments consistently demonstrate that CR-CTC significantly improves CTC performance.**\n\nConcerning the lack of improvement with CR-CTC in your system, **I suspect the issue might be caused by an incorrect relative scale when summing up the different losses.** This could be related to how the **\"reduction\"** parameter is specified for the batch of loss values. 
For example, the reduction in our PyTorch-based code is as follows:\\n\\n```python\\n# ctc_output: (2 * batch_size, seq_len, vocab_size), the log-probs\\n# ctc_output_lens: (2 * batch_size,)\\n# targets: (sum(target_lengths))\\n# target_lengths: (2 * batch_size,)\\n\\n# Compute CTC loss\\nctc_loss = torch.nn.functional.ctc_loss(\\n log_probs=ctc_output.permute(1, 0, 2), \\n targets=targets.cpu(),\\n input_lengths=ctc_output_lens.cpu(),\\n target_lengths=target_lengths.cpu(),\\n reduction=\\\"sum\\\", \\n)\\n\\n# Compute CR loss\\ncr_targets = ctc_output.detach().chunk(2, dim=0) # stop-grad\\ncr_targets = torch.cat([cr_targets[1], cr_targets[0]], dim=0) # exchange\\ncr_loss = nn.functional.kl_div(\\n input=ctc_output,\\n target=cr_targets,\\n reduction=\\\"none\\\",\\n log_target=True,\\n) \\nlength_mask = pad_mask(ctc_output_lens).unsqueeze(-1) # True for padding positions\\ncr_loss = cr_loss.masked_fill(length_mask, 0.0).sum()\\n\\n# The following lines are optional if we are using Adam-like optimizer (which is invariant to gradient scale)\\n# Scale ctc_loss and cr_loss by the total number of frames \\ntot_frames = ctc_output_lens.sum().item()\\nctc_loss = ctc_loss / tot_frames\\ncr_loss = cr_loss / tot_frames\\n```\\nIn compliance with the anonymous review policy, the link to our complete code will be included in the final version of the paper. Perhaps you could refer to it then. \\n\\nIf your system is a hybrid CTC/AED, another potential issue may arise from how the loss scales for CR loss, CTC loss, and AED loss are specified. 
**It is important to maintain the relative scale between the CR loss scale and the CTC loss scale, for example, keeping it at 0.2.** If your original loss weights were 0.1 for the CTC loss and 0.9 for the AED loss, then with the CR loss, the new scaling would be 0.02 for the CR loss, 0.1 for the CTC loss, and 0.9 for the AED loss.\"}", "{\"title\": \"Additional experiment to validate the effectiveness and generalization ability of CR-CTC on 50k-hour training data\", \"comment\": [\"To validate the effectiveness and generalization ability of CR-CTC with a large amount of training data, we additionally train the Zipformer-XL model with CTC and CR-CTC, separately, on a **50k-hour English dataset**, LibriHeavy (https://github.com/k2-fsa/libriheavy), and decode on LibriSpeech test sets. (Specifically, in line with all experiments in the manuscript, as CR-CTC involves two model forward passes, we train the CR-CTC model with half the batch size and half the number of epochs compared to the CTC model, ensuring a fair comparison in terms of training cost.) **Experimental results (WER %) on LibriSpeech test sets (test-clean/test-other) demonstrate that it can still significantly improve the CTC performance, when using a large amount of training data (50k hours)**:\", \"CTC, train for 12 epochs, greedy-search-decoding: 2.14/4.65; prefix-search-decoding: 2.14/4.66\", \"CR-CTC, train for 6 epochs, greedy-search-decoding: 1.94/3.57; prefix-search-decoding: 1.92/3.58\"]}", "{\"title\": \"Thanks and response to concerns (Part 1)\", \"comment\": \"We sincerely thank the reviewer for the detailed review and valuable comments, which have helped improve the clarity and quality of our work. Below, we provide detailed responses to each of the reviewer's concerns.\n\n> The method generates two different augmented views by independently applying existing SpecAugment to Zipformer. 
However, it raises the question of how generalizable this claim is when applied to other architectures like Conformer, E-Branchformer, or Branchformer.\n\n**In response to the comment, we have conducted additional experiments on the LibriSpeech dataset using a 12-layer Conformer encoder, to validate the effectiveness and generalization ability of CR-CTC.** We compare CR-CTC with standard CTC, CTC/AED, and pruned-transducer. We train the CTC model for 100 epochs, and the other three models for 50 epochs. The following experimental results demonstrate that it is also effective with Conformer, significantly surpassing standard CTC and achieving slightly better results compared to CTC/AED and pruned transducer. \n\n- CTC, 77.4M, 2.92/7.15\n- CTC/AED, 103.1M, 2.5/5.94\n- Pruned transducer, 78.6M, 2.49/5.87\n- CR-CTC, 77.4M, 2.43/5.78\n\n**We have supplemented these results in Appendix A.7 of the revised manuscript.**\n\n> Why was the choice of alpha set to 0.2 in Equation 3? It would benefit readers if the authors could provide results from an ablation study showing the impact of different alpha values on performance. This would offer greater insight into the method's sensitivity to this hyperparameter choice.\n\nThanks for the comment. In the original version of the manuscript, we presented the results of using different values for alpha (0.1, 0.2, 0.3) in Appendix A.5, Table 15 (now Table 16). Setting alpha to 0.2 yielded the best result. We also mentioned this in the first paragraph of Section 4.3. \n\n> Why was a beam size of 4 specifically chosen for comparisons with other state-of-the-art models? 
The authors may consider including results with different beam sizes (e.g., 1, 4, 8) in an appendix to show the method's sensitivity to this parameter.\n\n**Our experimental results show that the performance of CTC models (both CR-CTC and standard CTC) is not sensitive to the beam size for prefix-search-decoding:**\", \"ctc_baseline_prefix_search_decoding\": [\"beam-size=1: 2.52/6.02\", \"beam-size=2: 2.52/6.02\", \"beam-size=4: 2.52/6.02\", \"beam-size=8: 2.52/6.02\"], \"cr_ctc_with_prefix_search_decoding\": \"- beam-size=1: 2.1/4.61\n- beam-size=2: 2.1/4.61\n- beam-size=4: 2.1/4.61\n- beam-size=8: 2.1/4.61\n\n**Thanks for the suggestion. We have added the results in Appendix Section A.3, Table 14, in the revised manuscript.**\n\n> The authors employed a larger amount of time-masking by increasing both the number of time-masking regions and the maximum masking fraction by a factor of 2.5. However, it would be interesting to know how much time-masking is optimal for masked prediction. The authors could provide results from an ablation study showing performance with different amounts of time-masking (e.g., 1x, 1.5x, 2x, 2.5x, 3x) for ZipformerXL and at least one baseline model in Table 15. This would help readers understand how critical this choice is and how generalizable the method is.\n\n**In the original manuscript, we presented results for different amounts of time masking (ratio=1, 1.5, 2.0, 2.5, 3) for CR-CTC in Table 15 (now Table 16). The ratio of 2.5 achieved the best result.** \n\nOur ablation studies were all conducted with the Zipformer-M encoder. We did not conduct ablation studies with the 286M-parameter Zipformer-XL on the 1k-hour LibriSpeech dataset, due to the risk of overfitting associated with training such a large model on this dataset. We provide a more detailed explanation in a later response. \n\n> In Tables 1, 2, and 3, are the CTC/AED baseline results reported with CTC-only decoding or CTC/AED joint decoding? 
This could be clarified by specifying the decoding methods for the baselines and the proposed method in the table captions.\n\nSorry for omitting the declaration of the decoding method for CTC/AED. **For CTC/AED systems, we used joint decoding that combines CTC scores and AED scores [1].** This information has been added to Section 4.1 (Implementation Details) in the revised version of the manuscript.\n\n[1] Watanabe, Shinji, et al. \"Hybrid CTC/attention architecture for end-to-end speech recognition.\" IEEE Journal of Selected Topics in Signal Processing 11.8 (2017): 1240-1253.\"}", "{\"summary\": \"This paper proposes an improved version of Connectionist Temporal Classification (CTC) called Consistency-Regularized CTC (CR-CTC) for automatic speech recognition (ASR). CR-CTC enforces consistency between two CTC distributions obtained from different augmented views of the input speech mel-spectrogram. The proposed method has 3 advantages: 1) it conducts self-distillation between random pairs of sub-models that process different augmented views; 2) it learns contextual representation through masked prediction for positions within time-masked regions, especially when we increase the amount of time masking; 3) it suppresses the extremely peaky CTC distributions, thereby reducing overfitting and improving the generalization ability. Extensive experiments on LibriSpeech, Aishell-1, and GigaSpeech datasets demonstrate the effectiveness of CR-CTC, which achieves performance comparable to, or even slightly better than, that of transducer and CTC/AED.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"CR-CTC takes two different augmented views of the same speech mel-spectrogram as the inputs and enforces consistency between the two obtained CTC distributions. 
This method helps the model to do self-distillation between randomly sampled sub-models, learn contextual representation through masked prediction and reduce the peaky CTC distribution. The idea is simple and easy to implement. The training cost didn\u2019t increase based on the description in section 4.1. The experiments on LibriSpeech, Aishell-1, and GigaSpeech show the proposed method outperforms standard CTC in all these sets for different model sizes. It achieves comparable accuracy or even better than the advanced transducer or CTC/AED model. The paper also provided a detailed ablation study to help the reader understand more details about the method.\", \"weaknesses\": \"Comparing results in tables 1 and 3, the advantages of CR-CTC over standard CTC are smaller for the GigaSpeech set than for the LibriSpeech set. This may indicate that the proposed method may not work very well for big training data, e.g. tens of thousands of speech hours.\n In tables 1, 2 and 3, it\u2019s not mentioned whether the results for transducer and CTC/AED models are from beam search or greedy search. For these two models, the results of beam or greedy search usually have big differences. If the results given are from greedy search, it may mean the accuracy of the CR-CTC model still has a gap from that of the transducer and CTC/AED model with beam search.\", \"questions\": \"For the combination of transducer and CR-CTC model, is the CTC score used during decoding?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposed to use consistency regularization to improve CTC training by enforcing consistency between two CTC distributions obtained from different augmented views of the input speech Mel-spectrogram. 
The proposed method is simple but effective, showing impressive improvements from a solid baseline on LibriSpeech, Aishell-1, and GigaSpeech.\n\nReviewers questioned the novelty, scalability, and applicability of the method to other architectures beyond Zipformer. The authors rebutted by highlighting their focus on supervised training as opposed to self-supervised training in existing literature. They added more discussion of the related works, including missing references in the initial submission. To address architectural concerns, further experiments with Conformer showed similar improvements. Additionally, tests on the 10k-hour WenetSpeech dataset demonstrated convincing gains, alleviating scalability concerns.\n\nDespite some reservations about the novelty, I concur with reviewers that the method is straightforward and effective. Considering the authors addressed reviewers\u2019 concerns during rebuttal, the paper has value for publication.\n\nIn summary, the strength of this paper is that it presents a simple but effective method to improve CTC performance by regularizing the consistency between two CTC distributions obtained from different augmented views of the input speech Mel-spectrogram. The weakness of the paper is that the consistency regularization has been studied in the literature (e.g., for self-supervised RNN-T training) although there is no work for supervised CTC training.", "additional_comments_on_reviewer_discussion": "Reviewers questioned the novelty, scalability, and applicability of the method to other architectures beyond Zipformer. The authors rebutted by highlighting their focus on supervised training as opposed to self-supervised training in existing literature. They added more discussion of the related works, including missing references in the initial submission. As a result, one reviewer raised the score from 3 to 6.\n\nTo address architectural concerns, further experiments with Conformer showed similar improvements. 
Additionally, tests on the 10k-hour WenetSpeech dataset demonstrated convincing gains, alleviating scalability concerns. \\n\\nOverall, the authors addressed reviewers\\u2019 concerns during rebuttal. Therefore, despite some reservations about the novelty, I tend to accept the paper.\"}", "{\"title\": \"Summary of our revisions\", \"comment\": [\"We would like to express our sincere gratitude to all the reviewers for their insightful comments and constructive feedback, which have significantly contributed to improving the quality of our paper. We also greatly appreciate the guidance provided by the Area Chair. Initially, we received scores of 3, 5, 8, 8, which have improved to 6, 5, 8, 8 after the rebuttal process.\", \"Below, we provide a summary of the revisions made in response to the reviewers' suggestions.\", \"We have updated the paragraphs discussing related work on consistency regularization in Section 2 to more clearly highlight the distinctions between our work and existing approaches. (In response to Reviewer LMc2 and Area Chair bFC)\", \"We have added an additional self-distillation experiment, EMA-distilled CTC, in Section 4.3. The experimental results in Table 4 demonstrate that CR-CTC significantly outperforms EMA-distilled CTC (2.12/4.62 vs. 2.31/5.25). (In response to Reviewer LMc2)\", \"Reviewer 6WyR and Reviewer HiGE expressed concerns that the improvements observed on the 10k-hour GigaSpeech dataset were smaller than those on the 1k-hour LibriSpeech dataset. First, we clarified that this is expected, as regularization methods typically show smaller gains when using larger amounts of training data due to a reduced tendency for overfitting. Second, we also conducted additional experiments on the 10k-hour WenetSpeech dataset and the 50k-hour LibriHeavy dataset. The results demonstrate that the improvements are still substantial: 7.73/10.83 -> 6.63/8.63 with WenetSpeech and 2.14/4.66 -> 1.92/3.58 with LibriHeavy. 
We believe these significant improvements further validate the effectiveness and generalization ability of our approach, and we hope they address the reviewers' concerns.\", \"We have added experiments on the LibriSpeech dataset using a Conformer encoder in Appendix A.7. The results demonstrate that our CR-CTC is also effective with Conformer, significantly outperforming standard CTC and achieving slightly better results compared to CTC/AED and the transducer model. (In response to Reviewer HiGE and Reviewer LPhF)\", \"In response to the comments, we have corrected typos, adjusted the table positions to optimize space usage, and clarified the omitted implementation details.\"]}", "{\"summary\": \"Consistency regularization (CR) is a well established existing method, where you forward through some model two times with different augmentation (and maybe also dropout or other randomness) to get two predictions, and then you minimize the symmetric KL between both, or similar. Thus, this method is purely on the training side, and doesn't change any modeling aspect.\\n\\nHere, CR is applied to speech recognition, specifically to CTC models, on a frame-by-frame basis, called CR-CTC. 
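For concreteness, the frame-wise symmetric-KL consistency term described in this summary can be sketched in a few lines of stdlib Python. The per-frame posteriors, the 0.5 weighting, and the averaging over frames are illustrative assumptions for this sketch, not the paper's exact loss:

```python
import math

def kl(p, q):
    # KL(p || q) for discrete distributions over the same support.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cr_loss(frames_a, frames_b):
    # Frame-wise symmetric KL between the per-frame posteriors of two
    # forward passes of the same model under different augmentations.
    total = 0.0
    for p, q in zip(frames_a, frames_b):
        total += 0.5 * (kl(p, q) + kl(q, p))
    return total / len(frames_a)

a = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]  # branch a: 2 frames, 3 symbols
b = [[0.6, 0.3, 0.1], [0.6, 0.2, 0.2]]  # branch b: same input, other mask
print(cr_loss(a, a))        # 0.0 for identical branches
print(cr_loss(a, b) > 0.0)  # positive when the branches disagree
```

In the actual method the two distributions come from two forward passes of one CTC model under different SpecAugment masks, and this term is added to the usual CTC objective with a scale factor.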
The main difference in the two branches is caused by different SpecAugment masking.\\n\\nFor fair comparison, due to forwarding the data twice now, the number of epochs and the batch size are both halved for the CR-CTC case.\\n\\nExperiments are done on Librispeech, Gigaspeech and Aishell.\\n\\nThe method is mainly tested on CTC, but then some extension to that is when they used a joint AED/CTC model in the end, where CR is applied only to the CTC part.\\n\\nA number of ablations have been made on the loss scale and on increasing the SpecAugment masking, which seems to help more on CR-CTC, but for pure CTC, the original amount of SpecAugment masking already seems optimal.\", \"the_improvements_on_librispeech_test_other_are_quite_large\": \"From 5.72% WER to 4.35% WER.\\n\\nIt is also shown that it reduces the peakiness of the alignment behavior a bit.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Simple method.\", \"Seems to give huge improvements, at least in some settings.\"], \"weaknesses\": [\"Some smaller details are a bit unclear.\", \"Only tested for Zipformer. More standard would be Conformer, but this is missing.\", \"Unclear how well this method works in other cases, e.g. other models, other datasets, some other hyperparams different. Specifically, I tested it in my setup, and it didn't really help.\"], \"questions\": \"Abstract starts a bit strange. It says CTC is worse than RNN-T and AED. Yes sure, we know. But then it talks about some method to maybe improve CTC. So why is mentioning RNN-T/AED relevant? Is it because you think the gap between CTC and AED/RNN-T is larger than what you would expect, and some methods like the one presented here should close the gap? But I don't think that this really is being shown here in this work. Also, some variant of this method could maybe be applied to AED/RNN-T just as well. 
So, I don't really see why mentioning AED/RNN-T in the abstract is really relevant for this work here. It's fine in the introduction, to put CTC into perspective, but I don't think it's relevant in the abstract. I was a bit confused about this.\\n\\nEq 3 and also Figure 1, the CTC loss is maybe better formulated on z, not on x? I found it weird that x goes into L_CTC but z goes into L_CR.\\n\\n\\n\\\"a time warping factor of 80\\\" - what does that mean? I don't think you make the sequence 80 times longer?\\n\\nPlease clarify the downsampling of the Zipformer. Do you stick to the original Zipformer here, where the Conv Frontend downsamples the 100Hz feature frames to 50Hz, and then the residual/bypass connection is always at 50Hz, and at the very end, you downsample again to get 25Hz output frames, i.e. the log probs are at 25 Hz?\\n\\nDid you investigate how the downsampling influences the CR-CTC loss? I think this can have quite a crucial impact, as we know that in general, for CTC, the ratio of input length to output length plays an important role for convergence and training dynamics.\\n\\nDid you also test with other vocab sizes? 500 BPE size seems quite small to me. How does it perform with larger vocab sizes, e.g. with 10k?\\n\\nHow does the Zipformer influence the results? Specifically, do you think you get the same improvements with a normal Conformer?\\n\\n\\n\\\"auxiliary head of AED\\\" (p9, l483) (and also same with transducer) / Table 7: I don't exactly understand what you report here. Is the AED (or transducer) head just used as an aux loss, and during recognition, you only use the CTC head and ignore the AED (or transducer)? Please be more clear about that. Also, you are giving the wrong citation for that. The reference you give is about joint AED/CTC, where both AED and CTC heads are used for recognition, so nothing is ignored, nothing is just used as aux loss. 
The only reference I know where AED is used as an aux loss for CTC is \\\"Keep Decoding Parallel With Effective Knowledge Distillation From Language Models To End-To-End Speech Recognisers\\\", Hentschel et al, 2024.\\n\\n\\nOn GigaSpeech, improvement seems much less (XL, test: 10.87 -> 10.28) compared to Librispeech (5.72 -> 4.35). Why?\", \"table_3_caption\": \"\\\"GigaSpeeech\\\" typo.\\n\\n\\nTransducer w/ CR-CTC, what exactly is that? The same approach applied on transducer? But then this is not CTC? Or is it combined CTC with transducer?\\n\\n\\nNote, as your method is very simple to implement, and your improvements here are really impressive, I was just trying it out myself. However, with negative result: For my Conformer CTC baseline, on 100Hz inputs, downsampled by factor 6, with BPE 10k vocab, with aux AED loss (\\\"Keep Decoding Parallel With Effective Knowledge Distillation From Language Models To End-To-End Speech Recognisers\\\", Hentschel et al, 2024), where my baseline with greedy decoding without LM was at 5.93% on dev-other, it degraded with CR-CTC to 5.99% on dev-other. I halved the number epochs and halved the batch size for the CR experiment, just like you did. This is with CR loss scale 0.2. I did not adapt SpecAugment yet, but from your paper, I would expect that even with this setting, I should already see quite some improvement. So, why don't I? Your paper is lacking such study on other settings, as mentioned above (Conformer, other BPE sizes, other downsampling) to know whether I can/should expect similar improvements there or not, and whether I maybe need a very different CR loss scale there, or whether I need to care about other things.\\n\\n\\nNote, halving the batch size can have other effects. Many methods (e.g. optimizer, LR schedule, regularization, etc) don't work in the same way for different batch sizes. You do effectively more updates to the model. It can also have a regularization effect. 
So, I think an important missing experiment is: What happens to the baseline when you half the batch size? Maybe you also get improvements there?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"x\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thank you for addressing my comments and providing additional analysis. I believe these improvements have enhanced the overall presentation of the paper. I am satisfied with the revisions and will maintain my score.\"}", "{\"comment\": \"1 For the accuracy improvement comparison between LibriSpeech and GigaSpeech. Yes, CR-CTC still improves the accuracy obviously compared with CTC, but the relative gain for LibriSpeech is larger than that for GigaSpeech (~20% vs. less than 10%).\", \"2_for_the_decoding_method_for_transducer_and_aed_models\": \"yes, the comparison is fair if beam search are also used for these models.\"}", "{\"title\": \"Thanks for review and feedback\", \"comment\": \"Thank you for your detailed review, valuable comments, and patient feedback. We hope our revisions have addressed your concerns. If there are any further details we can improve, please feel free to let us know.\\n\\nWe sincerely appreciate your time and effort. Thanks a lot!\"}", "{\"title\": \"Thanks for feedback\", \"comment\": \"We\\u2019re happy that our response addressed your concerns. Thank you for your feedback!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thanks and response to concerns (part 1)\", \"comment\": \"We sincerely thank the reviewer for the valuable and insightful comments and pointing out the missing references, which greatly contribute to improving the clarity and quality of our work. 
Below, we address each of the reviewer's concerns in detail.\\n\\n> the gain is small\\n\\n**We would like to argue that the performance gain achieved by CR-CTC is highly significant,** as demonstrated by the experimental results on the LibriSpeech, AiShell-1, and GigaSpeech datasets (Tables 4, 5, 6). (Note: our primary goal is to enhance CTC performance and narrow the performance gap between CTC and transducer or CTC/AED systems.) Below, we summarize our experimental results:\\n- **CR-CTC significantly improves the performance of CTC.** For example, on LibriSpeech, CR-CTC achieves the following WER (%) improvements: 2.85/6.89 -> 2.52/5.85 with Zipformer-S, 2.52/6.02 -> 2.1/4.61 with Zipformer-M, 2.5/5.72 -> 2.02/4.35 with Zipformer-L. \\n- **CR-CTC achieves results comparable to, or even slightly better than, those of transducer and CTC/AED. It is worth mentioning that this is the first work to enable CTC models to match the performance of transducer and CTC/AED systems.**\\n- CR-CTC can further improve the performance of transducer and CTC/AED when employed for joint training, achieving **new state-of-the-art results.** For instance, on LibriSpeech, with Zipformer-L encoder, CR-CTC achieves the following WER (%) improvements: 2.09/4.59 -> 1.96/4.08 for CTC/AED, 2.00/4.38 -> 1.88/3.95 for pruned transducer. \\n- CR-CTC also clearly surpasses the straightforward methods that use an auxiliary head of AED or transducer for joint training to improve CTC performance (Table 7). \\n\\n> The paper lack of citation to many very relevant work:\", \"most_important_one\": \"Contrastive siamese network for semi-supervised speech recognition (https://arxiv.org/pdf/2205.14054). The paper focus on compare with SoTA, instead of compare with literature self-distill or other consistency based baseline. 
In that paper, it include practical trick to make SimSiam type of model work for ASR.\\n\\nWe would like to clarify the **main distinctions between our work and the self/semi-supervised ASR works using consistency regularization:**\\n- For the self/semi-supervision works, such as C-Siam and Speech SimCLR, consistency regularization is employed as an unsupervised objective to train a transformer encoder on unlabeled speech data. These works primarily focus on addressing training issues such as the shortcut learning problem, e.g., via a reconstruction loss in Speech SimCLR and temporal augmentation in C-Siam. **In contrast, our work focuses on a fully supervised setting, where we use the consistency loss as a regularization term to improve the performance of a CTC model trained on labeled data. Since the consistency regularization is enforced on CTC distributions, which are stably supervised by the main CTC loss, it inherently avoids the training issues associated with the unsupervised objectives as in Speech SimCLR and C-Siam.** \\n- Moreover, when applying the consistency regularization on CTC distributions, we provide in-depth insights into its essential behaviors from different perspectives: **self-distillation, masked prediction which learns contextual representations, and peak suppression which mitigates overfitting and improves the model\\u2019s generalization ability.** These are empirically validated by ablation studies in Section 4.3 (Tables 4, 5, 6). **Notably, this is the first work to identify that simply suppressing the peaky CTC distributions can clearly improve the CTC performance,** as demonstrated by our additionally proposed Smooth-Regularized CTC (SR-CTC), which is specifically designed to learn smoother CTC distributions (Appendix Section A.1).\"}", "{\"title\": \"Thanks and response to concerns (Part 2)\", \"comment\": [\"> Only tested for Zipformer. More standard would be Conformer, but this is missing.\", \"> How does the Zipformer influence the results? 
Specifically, do you think you get the same improvements with a normal Conformer?\", \"**In response to the comments, we have conducted additional experiments on the LibriSpeech dataset using a 12-layer Conformer encoder, to validate the effectiveness and generalization ability of CR-CTC.** We compare CR-CTC with standard CTC, CTC/AED, and pruned transducer. We train the CTC model for 100 epochs, and the other three models for 50 epochs. **The following experimental results demonstrate that it is also effective with Conformer, significantly surpassing standard CTC and achieving slightly better results compared to CTC/AED and pruned transducer.**\", \"- CTC, 77.4M, 2.92/7.15\", \"- CTC/AED, 103.1M, 2.5/5.94\", \"- Pruned transducer, 78.6M, 2.49/5.87\", \"- CR-CTC, 77.4M, 2.43/5.78\", \"**We have supplemented these results in Appendix A.7 of the revised manuscript.**\", \"> Did you investigate how the downsampling influences the CR-CTC loss? I think this can have quite a crucial impact, as we know that in general, for CTC, the ratio of input length to output length plays an important role for convergence and training dynamics.\", \"In our paper, we adopt the commonly used downsampling rate of 4, where the input frame rate is 100 Hz, and the output frame rate is 25 Hz.\", \"In response to the comment, we have conducted additional experiments on the LibriSpeech dataset to investigate the impact of **different encoder downsampling rates (2, 4, 8)** on CTC and CR-CTC, with Zipformer-M encoder, by changing the downsampling rates in the output Downsample module in Zipformer. 
**The following experimental results on test-clean/test-other (WER %) with greedy-search-decoding demonstrate that CR-CTC consistently outperforms the CTC baseline across different downsampling rates.** Interestingly, increasing the downsampling rate from 4 to 8 slightly improves the CTC baseline, though its performance remains notably inferior to that of CR-CTC.\", \"Downsampling rate = 2:\", \"- CTC, train for 100 epochs, 3.25/7.91;\", \"- CR-CTC, train for 50 epochs, 2.38/5.36\", \"Downsampling rate = 4 (current setting):\", \"- CTC, train for 100 epochs, 2.51/6.02;\", \"- CR-CTC, train for 50 epochs, 2.12/4.62\", \"Downsampling rate = 8:\", \"- CTC, train for 100 epochs, 2.44/5.67;\", \"- CR-CTC, train for 50 epochs, 2.12/4.74\", \"> Did you also test with other vocab sizes? 500 BPE size seems quite small to me. How does it perform with larger vocab sizes, e.g. with 10k?\", \"As described in the manuscript, our experiments are conducted using the Icefall framework, where the default BPE size is set to 500 for both the LibriSpeech and GigaSpeech datasets.\", \"**In response to the comment, we have conducted additional experiments on the LibriSpeech dataset to test a large vocab size of 10k with Zipformer-M encoder.** Experimental results show that increasing the vocab size from 500 to 10k leads to performance degradation for both CTC and CR-CTC. This indicates that the vocab size of 10k might be too large for the 1k-hour LibriSpeech dataset. **It is worth mentioning that CR-CTC still significantly outperforms the CTC baseline with the vocab size of 10k.**\", \"BPE vocab size = 500 (current setting):\", \"- CTC, train for 50 epochs, 2.77/6.6\", \"- CR-CTC, train for 25 epochs, 2.3/5.23\", \"BPE vocab size = 10k:\", \"- CTC, train for 50 epochs, 3.03/6.65\", \"- CR-CTC, train for 25 epochs, 2.46/5.46\", \"> Note, halving the batch size can have other effects. Many methods (e.g. optimizer, LR schedule, regularization, etc) don't work in the same way for different batch sizes. 
You do effectively more updates to the model. It can also have a regularization effect. So, I think an important missing experiment is: What happens to the baseline when you half the batch size? Maybe you also get improvements there?\", \"As CR-CTC requires two forward pass during training, we train CR-CTC models with half the batch size and half the number of epochs compared to CTC models, ensuring a fair comparison in terms of training cost. **The total number of iterations remains the same for both models.**\", \"**In response to the comment, we have experimented with using half the batch size and half the number of epochs for the CTC baseline.** Similar to CR-CTC, this adjustment resulted in each epoch having twice the number of iterations. **However, our results indicate that this leads to performance degradation for the CTC baseline model.**\", \"CTC Baseline: train for 100 epochs, 2.51/6.02\", \"CTC, train for 50 epochs, half batch size, 2.76/6.5\", \"CR-CTC, train for 50 epochs, half batch size, 2.12/4.62\"]}", "{\"summary\": \"This paper proposes a method to improve CTC performance by applying self-distillation between sub-models using drop-based techniques. The approach aims to enhance target token distribution predictions within time-masked regions and develop contextual representations from unmasked segments, drawing inspiration from self-supervised learning methods. By increasing time-masking, this technique promotes effective masked prediction, reducing peaky CTC distributions and strengthening the model's generalization ability. 
Experiments on multiple datasets\\u2014Librispeech, Aishell-1, and GigaSpeech\\u2014demonstrate that the proposed method achieves performance on par with transducer and CTC/AED models when used in joint training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper presents a simple yet effective distillation method by extending time-masking to develop contextual representations from unmasked segments across two different augmented views.\", \"It demonstrates that CTC performance is comparable to that of transducer and CTC/AED models.\"], \"weaknesses\": \"The method generates two different augmented views by independently applying existing SpecAugment to Zipformer. However, it raises the question of how generalizable this claim is when applied to other architectures like Conformer, E-Branchformer, or Branchformer.\", \"questions\": [\"Why was the choice of alpha set to 0.2 in Equation 3? It would benefit readers if the authors could provide results from an ablation study showing the impact of different alpha values on performance. This would offer greater insight into the method's sensitivity to this hyperparameter choice.\", \"Why was a beam size of 4 specifically chosen for comparisons with other state-of-the-art models? The authors may consider including results with different beam sizes (e.g., 1, 4, 8) in an appendix to show the method's sensitivity to this parameter.\", \"The authors employed a larger amount of time-masking by increasing both the number of time-masking regions and the maximum masking fraction by a factor of 2.5. However, it would be interesting to know how much time-masking is optimal for masked prediction. The authors could provide results from an ablation study showing performance with different amounts of time-masking (e.g., 1x, 1.5x, 2x, 2.5x, 3x) for Zipformer-XL and at least one baseline model in Table 15. 
This would help readers understand how critical this choice is and how generalizable the method is.\", \"In Tables 1, 2, and 3, are the CTC/AED baseline results reported with CTC-only decoding or CTC/AED joint decoding? This could be clarified by specifying the decoding methods for the baselines and the proposed method in the table captions.\", \"For a deeper understanding, the authors could include results showing the impact of increased time-masking on the baseline CTC model as well. This would help isolate whether the benefit comes from the two-branch architecture of CR-CTC or simply from more aggressive augmentation. Although the authors reported one of the baselines with larger time-masking, it would be helpful if results were provided for the other tables as well.\", \"In Table 3, the best results were obtained using Zipformer XL. However, the authors should:\", \"1. Explain the rationale for using Zipformer-M in Tables 4 and 5 instead of Zipformer-XL.\", \"2. Provide results for Zipformer-XL in Table 11 for completeness.\", \"3. Clarify whether the results in Table 11 are from self-distillation or masked prediction in CR-CTC.\", \"The results of SR-CTC in Table 6 are slightly worse than those of CR-CTC. Do the authors have any explanation for this behavior, and does SR-CTC also use increased time-masking?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks and response to concerns (Part 4)\", \"comment\": [\"> On GigaSpeech, improvement seems much less (XL, test: 10.87 -> 10.28) compared to Librispeech (5.72 -> 4.35). Why?\", \"Thanks for the comment. We agree that the performance gain of CR-CTC on the 10k-hour GigaSpeech dataset is smaller compared to the 1k-hour LibriSpeech dataset. 
**This is consistent with our expectation, as a regularization method typically provides smaller performance gains with larger training datasets due to reduced overfitting.** However, we would like to emphasize the following results on the GigaSpeech dataset:\", \"- **CR-CTC still significantly improves the performance of CTC models**\", \"- **CR-CTC achieves performance comparable to CTC/AED and pruned transducer models with Zipformer-L/XL encoders**\", \"- **Using CR-CTC for joint training can further enhance the performance of both CTC/AED and pruned transducer models.**\", \"**To validate the effectiveness and generalization ability of CR-CTC with a large amount of training data, we additionally train the Zipformer-XL model with CTC and CR-CTC, separately, on a 50k-hour English dataset, LibriHeavy (https://github.com/k2-fsa/libriheavy), and decode on LibriSpeech test sets.** (Specifically, in line with all experiments in the manuscript, as CR-CTC involves two model forward passes, we train the CR-CTC model with half the batch size and half the number of epochs compared to the CTC model, ensuring a fair comparison in terms of training cost.) **Experimental results (WER %) on LibriSpeech test-clean/test-other demonstrate that it can still significantly improve the CTC performance:**\", \"- CTC, train for 12 epochs, greedy-search-decoding: 2.14/4.65; prefix-search-decoding: 2.14/4.66\", \"- CR-CTC, train for 6 epochs, greedy-search-decoding: 1.94/3.57; prefix-search-decoding: 1.92/3.58\"]}", "{\"title\": \"Thanks for feedback\", \"comment\": \"Thank you very much for your feedback! We have supplemented additional experimental results to validate the effectiveness and generalization ability of CR-CTC on 50k-hour training data, as detailed in the comment below. 
Thank you again!\"}", "{\"title\": \"Thanks and response to concerns\", \"comment\": \"We sincerely thank the reviewer for the detailed review and valuable comments, which have helped improve the clarity and quality of our work. Below, we provide detailed responses to each of the reviewer's concerns.\\n\\n> Comparing results in table 1 and 3, the advantages of CR-CTC over standard CTC is smaller for GigaSpeech set than that for LibriSpeech set. This may indicate that the proposed method may not work very well for big training data, e.g. tens of thousands of speech hours.\\n\\nThanks for the comment. We agree that the performance gain of CR-CTC on the 10k-hour GigaSpeech dataset is smaller compared to the 1k-hour LibriSpeech dataset. This aligns with our expectation, as regularization methods typically yield smaller gains with larger training datasets due to reduced overfitting. However, we would like to highlight the following results on GigaSpeech dataset:\\n- **CR-CTC still significantly improves the WER (%) performance of CTC models:**\\n - 12.08/11.95 \\u2192 11.68/11.58 with Zipformer-S,\\n - 11.23/11.27 \\u2192 10.62/10.72 with Zipformer-M,\\n - 11.16/11.16 \\u2192 10.31/10.41 with Zipformer-L,\\n - 10.8/10.87 \\u2192 10.15/10.28 with Zipformer-XL.\\n- **Compared to CTC/AED and pruned transducer models, CR-CTC achieves comparable performance on Zipformer-L/XL models.**\\n- **Employing CR-CTC for joint training further improves the performance of both CTC/AED and pruned transducer models.** Specifically, with Zipformer-XL, it gets WER (%) performance improvements: 10.22/10.33 -> 9.92/10.07 for CTC/AED, 10.09/10.2 -> 9.95/10.03 for pruned transducer.\\n\\n> In table 1, 2 and 3, it\\u2019s not mentioned that the results for transducer and CTC/AED models are from beam search or greedy search. For these two models, the results of beam or greedy search usually have big differences. 
If the results given are from greedy search, it may mean the accuracy of CR-CTC model still have gap from that of transducer and CTC/AED model with beam search.\\n\\nSorry for the omitted declaration of the decoding methods used in our results for the transducer and CTC/AED models. For the pruned transducer models, we did employ beam search decoding [1], while for the CTC/AED models, we did use joint decoding by combining attention-based and CTC scores [2]. **Therefore, the comparisons are fair.** Thanks for your question. This information has been added to Section 4.1 (Implementation Details) in the revised version of the manuscript.\\n\\n[1] Kang, Wei, et al. \\\"Fast and parallel decoding for transducer.\\\" ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023.\\n\\n[2] Watanabe, Shinji, et al. \\\"Hybrid CTC/attention architecture for end-to-end speech recognition.\\\" IEEE Journal of Selected Topics in Signal Processing 11.8 (2017): 1240-1253.\\n\\n> For the combination of transducer and CR-CTC model, does the CTC score is used during the decoding?\\n\\nWhen using CR-CTC as an auxiliary loss to improve transducer models, **we only utilize the transducer head for decoding, without incorporating the CTC scores.** Thanks for your question. We have added this information in the first paragraph of Section 4.2.\"}" ] }
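Since the greedy-search vs beam-search distinction recurs throughout this review discussion, a minimal stdlib-Python sketch of greedy CTC decoding may help pin down the terminology. The toy posteriors and the choice of blank index 0 are illustrative assumptions, not the paper's decoder:

```python
from itertools import groupby

BLANK = 0  # index of the CTC blank symbol (assumed for this sketch)

def ctc_greedy_decode(frame_posteriors):
    # Greedy search: take the argmax symbol per frame, collapse
    # consecutive repeats, then drop blanks -- no beam, no external LM.
    best_path = [max(range(len(p)), key=p.__getitem__) for p in frame_posteriors]
    collapsed = [sym for sym, _ in groupby(best_path)]
    return [sym for sym in collapsed if sym != BLANK]

# Toy per-frame posteriors over {blank=0, 'a'=1, 'b'=2}; argmax path is 1,1,0,2,2.
frames = [
    [0.10, 0.80, 0.10],
    [0.20, 0.70, 0.10],
    [0.90, 0.05, 0.05],
    [0.10, 0.20, 0.70],
    [0.20, 0.10, 0.70],
]
print(ctc_greedy_decode(frames))  # [1, 2], i.e. "ab"
```

Beam search instead keeps several partial hypotheses per frame (optionally rescored by an external language model), which is why greedy and beam results can differ noticeably, especially for transducer and CTC/AED models.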
CIcMuee69B
A probabilistic automata learning approach for analyzing and sampling constrained LLM
[ "Matías Carrasco", "Franz Mayr", "Sergio Yovine", "Johny Kidd", "Martin Iturbide", "Juan Pedro da Silva Barloco", "Alejo Garat" ]
We define a congruence that copes with null next-symbol probabilities that arise when the output of a language model is constrained by some means during text generation. We develop an algorithm for efficiently learning the quotient with respect to this congruence and evaluate it on case studies for analyzing statistical properties of LLM.
[ "Grammatical Inference", "Probabilistic Deterministic Finite Automata", "Active Learning", "LLM" ]
Reject
https://openreview.net/pdf?id=CIcMuee69B
https://openreview.net/forum?id=CIcMuee69B
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTFCKgj6im", "wZEHhBLkm8", "ufgtBQd59R", "uagakRq7iC", "tSnKyWB5o5", "mKDDbGvvnZ", "lcPtoozXwW", "iWMloBG8kx", "hdUFJFuV1Y", "gzr6AJF7Jl", "fxTItvo0J3", "fvqzm8Vaw3", "fov9O8IM5Z", "eYa8oBMtSs", "cJ5WMuKcl4", "X8sf1TVrGn", "WxrxGbcl9i", "Wk0NGSmfD2", "WCWwpHjcib", "VMBQVaYzXH", "TsydRkLNmS", "SoACZl7ZIs", "Pa870LK79s", "N9wNBbHxN7", "KoKOiS3cK8", "JOilRBh9DU", "JIGHX3GS1M", "GsmKIOZbLv", "GNRmkdcbz0", "GN4osPo0oM", "FHDhLTTJ59", "DkqcvnP0i2", "CTlBlRHVrJ", "Bkmk7oylX3", "A9uOHCw2lf", "9RWztdf5Tr", "88JyVkbXE9", "6r4eG6wzjw", "5rdk00zDZu", "5mLPKSApov", "1svR6XkO6e" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732237235838, 1732238721210, 1732207726157, 1730383811601, 1732058240859, 1732766620137, 1730716691505, 1732290319575, 1732616880007, 1732767567217, 1733272787930, 1732058557720, 1730790462557, 1734731757321, 1732389095275, 1732061023563, 1733164721890, 1733174250289, 1732621538091, 1730699213291, 1732236654822, 1732624040400, 1732207432282, 1732207780411, 1732388310675, 1733227698158, 1732059104710, 1732388492287, 1732060308498, 1732235486360, 1737523906418, 1732289937796, 1732290115767, 1730728359046, 1733220784132, 1732389845403, 1732237651358, 1732391026338, 
1732625210482, 1732767121893, 1732240791733 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_hTvK" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_gJvq" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_VvAN" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_gJvq" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_VvAN" ], [ "ICLR.cc/2025/Conference/Submission8404/Area_Chair_MYd2" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_SS7n" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_sah2" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_SS7n" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_SS7n" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_VvAN" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_VvAN" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_SS7n" ], [ 
"ICLR.cc/2025/Conference/Submission8404/Reviewer_VvAN" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Reviewer_sah2" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ], [ "ICLR.cc/2025/Conference/Submission8404/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Question 1\", \"comment\": \"1. Should I read $\\\\mathcal{L}(\\\\sigma_1\\\\ldots\\\\sigma_n)$ as $P[\\\\sigma_n | \\\\sigma_1\\\\ldots\\\\sigma_{n-1}]$ ?\\n\\nNotice that $\\\\mathcal{L}(\\\\sigma_1\\\\ldots\\\\sigma_n)$ is a probability distribution.\\n\\n$\\\\mathcal{L}(\\\\sigma_1\\\\ldots\\\\sigma_{n-1})(\\\\sigma_n)$ would be $P[\\\\sigma_n | \\\\sigma_1\\\\ldots\\\\sigma_{n-1}]$ provided $P[\\\\sigma_1\\\\ldots\\\\sigma_{n-1}]$ is not zero.\"}", "{\"title\": \"Response to Question 3\", \"comment\": \"3. How would you define an arbitrary finite-dimensional marginal of $\\\\mathbf{P}$?\\n\\nFor instance, if we consider an alphabet of two symbols $a$ and $b$, and the cylinder set $C$ = \\u201cthe first symbol is $a$, and the third symbol is $b$\\u201d, then its probability can be computed as the sum of the probabilities of the cylinders $C_1$ = \\u201cthe first symbol is $a$, the second symbol is $a$, and the third symbol is $b$\\u201d and $C_2$ = \\u201cthe first symbol is $a$, the second symbol is $b$, and the third symbol is $b$\\u201d.\"}", "{\"title\": \"Response to weaknesses - Section 2 - Technical issue (b)\", \"comment\": \"Definition (2) is taken from [1,11] and is given as the quotient P(uw)/P(u)=P(vw)/P(v), so zero probabilities in the denominator give undefined quotients. In the case one side of the equation is undefined the equality must be understood as implying that the other side is also undefined. In your example, u and v are not congruent according to definition (2). 
So, the format in which you state the equality (as a product instead of a quotient) is not equivalent to our definition (2). We will clarify this in the paper.\"}", "{\"summary\": \"Many automata theoretic techniques have been introduced to analyse neural sequence-processing recognizers by composing them with automata theoretic formalisms with the purpose of verifying properties on-the-fly. In this work the authors propose an approach to combine automata theoretic formalisms with language generators, such as neural language models, in order to guide the generation process or constrain the text generation with some common sampling strategies. In this context, the occurrence of symbols that have zero probability of appearing is a problem. For example, the generation may not terminate, or the model may not define a probability distribution over finite strings.\\n\\nIn this paper, the authors define a notion of Myhill-Nerode-like congruence over strings which takes into account the occurrence of zero probabilities, which provides an underlying formal basis for learning probabilistic deterministic finite automata (PDFA) from neural language models constrained both by automata and sampling strategies. Another contribution is that they propose a new algorithm for learning the quotient with respect to this congruence. The authors provide experimental evidence that the new approach has some advantages with respect to existing approaches. Finally, the authors provide a framework to analyze statistical properties of LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"In my opinion, the paper is quite interesting. There is currently a pressing need to understand the behavior of LLMs when their operation is controlled by external mechanisms such as automata and more general formalisms. 
In this work the authors provide a contribution in this direction.\", \"weaknesses\": \"The paper as a whole is a fair contribution to the topic of LLMs. Nevertheless, the results follow by extending and adapting known constructions within automata theory. In this sense, I did not find the results of the paper particularly surprising.\", \"questions\": \"I do not have questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Comments on Section 3\", \"comment\": \"We would like to thank the reviewers for the detailed comments. All reviews have pointed out that Section 3 lacks sufficient detail to understand how the algorithm works, as it depends on results from the literature and on background in automata learning. We will write a general response and rewrite Section 3 taking into account the comments.\"}", "{\"summary\": \"The paper introduces a new congruence on words w.r.t. a given language model $\\\\mathcal{L}:\\\\Sigma^*\\\\to\\\\Delta(\\\\Sigma)$, which can be used to learn surrogate probabilistic deterministic finite automata using a Myhill-Nerode type theorem. It builds heavily on ideas introduced in [6], where a similar congruence and a corresponding tree-based learning algorithm have been developed, but addresses the problem of null probabilities, which can occur if the model output is externally constrained, e.g. 
by an automaton.\\n\\nMore precisely, the congruence is defined relative to an equivalence relation $E$ on the probability simplex $\\\\Delta(\\\\Sigma)$\\n$$\\nu\\\\equiv_E v :\\\\Leftrightarrow \\\\mathbb{1}(u) = \\\\mathbb{1}(v) \\\\land \\\\forall w\\\\in\\\\Sigma^*: \\\\mathbb{1}(uw) = \\\\mathbb{1}(vw) =1 \\\\to \\\\mathcal{L}(uw) =_E \\\\mathcal{L}(vw), \\n$$\\nwhere $\\\\mathbb{1}(u)$ indicates whether the string $u$ has positive probability under the model.\\nAs mentioned earlier, the main difference to the work done in [6] is the explicit handling of null probabilities. The definition ensures that $0$-probability transitions do not need to be explored while still maintaining a meaningful congruence. As a consequence, the QNT algorithm from [6] can be adapted to avoid $0$-probability transitions as much as possible. This is useful to keep the number of queries to the LLM as low as possible during learning.\\n\\nThe developed algorithm is compared against other variants of QNT and shows significantly reduced learning times. Finally, the authors conduct experiments to exemplify the use of their algorithm. First, they investigate the influence of the tokenizer on the next-token probabilities by learning surrogate automata under the top-k equivalence on $\\\\Delta(\\\\Sigma)$. Second, they restrict the output of a GPT-2 model to numbers between 0 and 1 and learn a surrogate automaton for the constrained output. They compare the output distribution of the LLM with the one from the automaton and demonstrate that the automaton is able to closely approximate the original distribution.\\n\\nI think the paper presents a meaningful improvement of the approach presented in [6], but I would like to criticize that the paper reads too much like an addendum to [6] in many places. This is especially true for Section 3. Without reading [6], it would not have been possible for me to comprehend this section. 
Another weakness is that the example use cases are arguably toy experiments. Finally, I have to point out that the submission is not according to the guidelines (missing line numbers). Because of the mentioned weaknesses, I can only give a weak recommendation for acceptance.\\n\\n[6] F. Mayr, S. Yovine, M. Carrasco, F. Pan, and F. Vilensky. A congruence-based approach to active automata learning from neural language models. In PMLR, volume 217, pages 250\\u2013264, 2023.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Reduces the number of LLM queries during learning by omitting 0-probability transitions\"], \"weaknesses\": [\"Heavily builds upon [6]. Especially Section 3 was not really comprehensible without reading [6].\", \"The submission is not in the format that is demanded in the guidelines (no line numbers!)\"], \"questions\": [\"Page 3 (Quotients): $\\\\bar{\\\\pi}([|q|]) = [\\\\pi(q)]$ (?)\", \"Page 3 (Quotients): $\\\\bar{\\\\tau}([|q|], \\\\sigma) = [|\\\\tau(q, \\\\sigma)|]$ (?)\", \"Page 4: What is sift? The paper would generally benefit from relying less on [6] as a reference.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"question 2 clarification\", \"comment\": \"Sorry, I confused Corollary 2.1 with Equation (6). I meant to ask if Equation (6) and rephrasing \\\\equiv_E over E require minimality.\"}", "{\"title\": \"Thanks for the correction\", \"comment\": \"Thanks, that clarified things for me. I would suggest that the authors thoroughly go through the entire formal exposition again. 
I think it is highly likely that I did not catch all such typos.\"}", "{\"title\": \"Section 3\", \"comment\": \"We have rewritten Section 3 in the revised version, adding a more detailed explanation of the algorithm and its inner operations, and an example run inspired by the case study of the generation of real numbers consisting of sequences of digits.\"}", "{\"title\": \"Follow up on Kolmogorov Extension Theorem II\", \"comment\": \"Yes, we only considered the case of sequences indexed by the natural numbers with their standard ordering. The version of Kolmogorov's Extension Theorem we use is more adapted to the case when one can define the finite dimensional probabilities inductively, as is our case with the function $P$. Other examples of standard textbooks that state the theorem in similar ways as we use it here are the following:\\n \\n1. Theorem A.3.1 in Durrett, Rick. Probability: theory and examples. Vol. 49. Cambridge University Press, 2019.\\n\\n2. Theorem D.1 in Bass RF. Kolmogorov extension theorem. In: Stochastic Processes. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press; 2011:382-384.\"}", "{\"title\": \"Response to question 2\", \"comment\": \"2. Does Corollary 2.1 hold for all PDFA, or does it require minimality?\\n\\nCorollary 2.1 holds for every language model, therefore including all PDFA. Actually, notice that #[[ \\\\Sigma* ]] is the number of congruence classes over \\\\Sigma* induced by the language model. For a PDFA A, #[[ \\\\Sigma* ]] turns out to be the number of states of the quotient \\\\bar{A}, which is the smallest PDFA congruent with A (see the Quotients paragraph right below Corollary 2.1). That is, A and \\\\bar{A} define the same congruence. Besides, #[[ \\\\Sigma* ]] may be significantly smaller than the upper bound given by Corollary 2.1 when the percentage of 0-probability transitions increases. 
We will add a figure together with the experiments in Section 3 to illustrate this.\"}", "{\"summary\": \"# Summary\\nThe paper addresses the problem of language models, where specific symbols during text generation may have a probability of zero. The authors tackle this by formalizing the problem in an automata-theoretic setting and defining a congruence relation on sequences that depends on (1) the extension of sequences (as is usual) and (2) the occurrence of symbols with zero probability. Based on the congruence relation, the authors propose an active automata learning algorithm called Omit-Zero. They evaluate the runtime of the algorithm in comparison to an existing algorithm called QNT. Finally, they apply the algorithm in case studies where the symbol generation of language models is guided.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Systematic analysis of LLMs is a timely topic, and a solid theoretical foundation can benefit the community.\", \"The algorithm and the claims seem generally sound.\"], \"weaknesses\": [\"The presentation of the paper needs to be significantly improved.\", \"There is a lot of notation and terminology, which is not bad per se, but some parts need to be explained better, and it might be possible to skip some parts. For example, the term \\\"quotient\\\" is used already in the second sentence of the abstract, but only on Page 3 (bottom) does it become clear what a quotient looks like.\", \"Section 3 can be understood with some knowledge of automata learning; however, I assume that most readers don't immediately know what \\\"sift\\\" means. 
The relevance of some concepts, like the Hopcroft-Karp algorithm, is unclear.\", \"It seems to me that the paper could benefit from a more stringent focus on the most relevant concept while abstracting away some details.\", \"It is unclear how the case studies relate to the main problem considered by the paper.\", \"Case Study 1: Here, we see a difference between different tokenizer settings, but it is unclear what the effect of the 0-probabilities problem would have been.\", \"Case Study 2: This case study examines the fidelity of learned automata, but the problem and its (potential) effect are not mentioned.\", \"From the introduction, I would have assumed to see an experiment showing non-termination or some similar effect.\"], \"questions\": \"1. Would the case study have been possible with QNT?\\n2. Does Corollary 2.1 hold for all PDFA, or does it require minimality?\\n3. In the runtime experiment, the probability of a symbol to be 0 is at least 0.9. Is this probability realistic when considering the guiding of an LLM? \\n4. Considering the problem of non-termination illustrated in the caption of Fig.1. It seems to me that a simple solution would be to avoid excluding $ using top_r. Am I missing something?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper uses a word congruence for a language model. The goal is to learn (surrogate) probabilistic automata using a typical Myhill-Nerode theorem. 
Particularly, the problem of sequences with probability 0 is tackled.\\n\\nAll reviewers agree that the paper requires further polishing and clarification, and therefore, another round of peer review is required.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers acknowledge that the paper has improved during the discussion phase, but an improvement in writing and presentation is still required.\"}", "{\"title\": \"Response to weakness 2\", \"comment\": \"Actually, the case studies are intended to give concrete examples of the use of the learned PDFA. We will try to make their purpose clearer.\\n\\nIt is important to remark that the behavior of an application that embeds an LLM does not depend solely on the LLM, but also on the software harness around it, which includes the tokenizer that transforms human language into tokens, as well as the sampling strategy. The purpose of Sections 2 and 3 is to propose a formal way to deal with 0 probabilities inevitably introduced when constraining the output of the LLM. The goal of Section 4 is to illustrate its application in realistic settings (as studied in other works of the literature such as [5, 14]), for which it is necessary to find a formal way to take into account the tokenizer. Our approach allows considering a sentence in human language as a symbol and relating it to the sequence of tokens depending on the tokenizer, which enables the verifier to define the appropriate level of abstraction and to seamlessly compare different configurations (including different LLMs, tokenizers and sampling strategies). \\n\\nIn the first case study, the primary goal is to analyze gender bias in GPT2 as proposed in [5], by looking at the probability distributions of several professions after the words \\u201cman\\u201d and \\u201cwoman\\u201d. The learned PDFA exhibits the probability distribution of the considered professions for each gender. 
It shows that for both configurations the probability distributions have a gender bias, and that this bias is different for each configuration. In contrast to other works in the literature, this is done without resorting to sampling or evaluating on a dataset.\\n\\nNow, from the perspective of model-based analysis, the LLM together with the guide, the sampling strategy, and all of the configuration elements, such as the tokenizer, is the system under analysis, and the learned PDFA is the model of such a system on which the analysis will be carried out. In this case, we need to ensure that the learned model is faithful, in the sense that what is concluded on the model can be extrapolated to the system under analysis. \\n\\nTo assess this, the goal of the second case study is to evaluate how close the distribution represented by the learned PDFA is to the one defined by the target system, where GPT2 is guided to generate real numbers in the interval [0\\u20261] as proposed in [14]. Besides, we illustrated how the task could actually be performed using different ways of guiding the LLM. First, with a guide that only allows the LLM to produce one digit at a time, and second, enabling it to pick any of its numeric tokens. Since the PDFA were learned using an equivalence relation between distributions, in this case quantization, it is natural to study to what extent this abstraction leads to a quantifiable divergence with respect to the distribution of the real system. To evaluate this, we perform statistical tests on sets of samples generated by the learned PDFA and the system. In both cases, it is observed that the distributions over the set of numbers of finite length in the interval [0..1] defined by the PDFA approximate quite accurately that of the target system. 
On the other hand, the experiments revealed that this is also the case for the distribution of lengths when the LLM is guided to generate sequences of digits but not for sequences of numeric tokens.\"}", "{\"title\": \"Response to questions on page 3 (Quotients)\", \"comment\": \"$\\\\bar{\\u03c0}([|q|])=[\\u03c0(q)]$ (?)\\nYes, $\\u03c0$ was missing in the RHS.\\n\\n$\\\\bar{\\u03c4}([|q|],\\u03c3)=[|\\u03c4(q,\\u03c3)|]$ (?)\\nYes, it must be $q$ instead of $[|q|]$ in the RHS.\\n\\nWe have corrected them accordingly.\"}", "{\"title\": \"Follow up on Kolmogorov Extension Theorem\", \"comment\": \"The relevance of condition (ii) depends on how you index the family of finite dimensional probability measures. In the versions of the theorem that you cite, it is assumed that the index $\\\\{t_1,\\\\ldots,t_k\\\\}$ is a subset of positive integers, where no arbitrary specification of their order is given. In this situation it is clear that one needs the invariance of condition (ii) to ensure a well defined measure on the space of infinite sequences. The situation can be illustrated by the following: if one expects the existence of a measure $P$ over infinite sequences $\\\\{x_t\\\\}$ such that $P(x_{t_1}\\\\in B_1,\\\\ldots, x_{t_k}\\\\in B_k)=P_{t_1\\\\ldots t_k}(B_1\\\\times \\\\ldots\\\\times B_k)$, since the intersection $\\\\{x_{t_1}\\\\in B_1,\\\\ldots, x_{t_k}\\\\in B_k\\\\}$ is the same whatever the ordering one chooses to write it, like $\\\\{x_{t_{\\\\pi(1)}}\\\\in B_{\\\\pi(1)},\\\\ldots, x_{t_{\\\\pi(k)}}\\\\in B_{\\\\pi(k)}\\\\}$ for a permutation $\\\\pi$, it is a necessary condition to have (ii). But the situation is quite different if one indexes the finite dimensional probabilities $P_{t_1\\\\ldots t_k}$ using only ordered sets of times $t_1<\\\\ldots<t_k$. In this case it does not make any sense to write $P_{t_{\\\\pi(1)}\\\\ldots t_{\\\\pi(k)}}$ for a permutation $\\\\pi$, because unordered sets of times are not part of the index set. 
The passage from an indexed family $P_{t_1\\ldots t_k}$ that uses only ordered sets $t_1<\\ldots<t_k$ to one that uses all sets, so that the permutation invariance (ii) holds and one can use the statement of the theorem that you cite, can be done by trivially defining $P_{t_1\\ldots t_k}(B_1\\times \\ldots\\times B_k) = P_{t_{\\pi(1)}\\ldots t_{\\pi(k)}}(B_{\\pi(1)}\\times \\ldots\\times B_{\\pi(k)})$ where $\\pi$ is the permutation that puts the set $\\{t_1,\\ldots,t_k\\}$ in increasing order.\"}", "{\"title\": \"Response to \\\"Conceptual confusion\\\"\", \"comment\": \"To add to this confusion, it is not clear to me which \\\"language models\\\" correspond to probability measures constructed by Kolmogorov extension. It is only for these that the discussion on, e.g., congruences applies.\", \"response\": \"Proposition 2.1 (which relies on the Kolmogorov extension theorem) applies to any language model as defined in Definition 1. The definition of congruence relies neither on Prop. 2.1 nor on the Kolmogorov extension theorem. It only depends on $P$ (not on $\\\\mathbf{P}$).\"}", "{\"title\": \"Kolmogorov's extension theorem\", \"comment\": \"I could not access [10], but standard textbooks can be consulted on this, for example\\n- Athreya, Krishna B., and Soumendra N. Lahiri. Measure theory and probability theory, see Theorem 6.3.1\\n- Oksendal, Bernt. 
Stochastic differential equations: an introduction with applications, see theorem 2.1.5.\\n\\nTwo conditions are needed to define a consistent family \\\\\\\\(P_t\\\\\\\\) of probability measures in this standard formulation:\\n\\n(i) \\\\\\\\( P_{t_1...t_k}(B_1\\\\times ...\\\\times B_{k-1}\\\\times \\\\mathbb{R}) = P_{t_1...t_{k-1}}(B_1\\\\times ...\\\\times B_{k-1}) \\\\\\\\)\\n\\n(ii) \\\\\\\\( P_{t_1...t_k}(B_1\\\\times ...\\\\times B_k) = P_{t_{\\\\pi(1)}...t_{\\\\pi(k)}}(B_{\\\\pi(1)}\\\\times ...\\\\times B_{\\\\pi(k)}) \\\\\\\\) for any permutation \\\\\\\\( \\\\pi\\\\in Sym(k)\\\\\\\\)\\n\\nAn alternative but equivalent formulation can be found in Dudley, Richard M. Real analysis and probability, theorem 12.1.2.\\n\\nThe proof in your submission only covers (i), and from the definitions it is not clear to me how (ii) could even be written down.\"}", "{\"summary\": \"In this paper, the authors consider the problem of learning a probabilistic deterministic finite automaton (PDFA) by interacting with a constrained sequence model with a special sampling scheme, such as a large language model constrained with a given property and sampled with a top-k selection scheme. The main challenge to learn a PDFA in such a setting is that the distribution over strings by the constrained LLM with a particular sampling scheme may assign non-zero probabilities to infinite sequences; in order to even describe an ideal PDFA to learn, one has to come up with an appropriate notion of equivalence on strings that are induced by the constraint and the sampling scheme. In the paper, the authors indeed propose such a notion in a clean general way, define the induced quotienting operators on language models (formalised as maps from finite strings to probability distributions over characters and EOS), and PDFAs, and analyse the properties of these operators. 
Based on this formal development, they propose an algorithm for learning a PDFA from interactions with a constrained language model with a special sampling scheme formalised by an equivalence relation on distributions over characters and EOS. Their algorithm is applied to simple language models which are based on randomly generated DFAs, and also to the constrained GPT-2 model with a special sampling scheme. The results of these experiments show the promise of the algorithm in the paper.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper describes the issue of a constrained language model assigning non-zero probability to infinite sequences. My understanding is that this is not one of the commonly discussed issues on language models (and their distillations which seem to be related to the work in the paper). Asking an unusual question, I think, is good for the research area.\\n\\n2. The formal development in the paper is thorough and rigorous. Here I mean Section 2 of the paper, which I liked and learnt a lot from. But as I will mention in the weakness box below, I found Section 3 of the paper confusing and hard to follow. \\n\\n3. The authors' learning algorithm is derived from a solid theory.\", \"weaknesses\": \"1. My main reservation is that Section 3 was very difficult to follow. It is the part that describes the main contribution of the paper, namely, the algorithm for learning a PDFA from the interactions with a teacher model (formalised via the EQ query). But the section assumes the familiarity with QNT, and does not describe what sift, build, EQ, update, and InitializeHypothesis do, and how their learning algorithm works . Also, I couldn't understand what path means (which is used in (8)). Illustrating the run of the algorithm with a concrete example may be helpful.\\n\\n2. The next point is not really a weakness, but it partly explains why I am not a strong supporter of the paper. 
While I read the paper, one question kept popping up to me: what can one do if she or he has a quotiented PDFA that models a given constrained language model with a special sampling scheme (such as top-k)? The abstract says that such a quotiented PDFA can be used to analyse the statistical properties of the language model. Elaborating on this and giving some concrete examples would make the paper more appealing to someone like me.\", \"questions\": \"If the authors can respond to the two points that I mentioned in the weakness box, that would be helpful for me to understand the paper more. But they don't have to do so.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to weaknesses - Section 4\", \"comment\": \"- What exactly are you trying to show and why?\\n\\nThe first goal of Sec. 4 is to formalize what it means to guide an LLM in order to get a well-defined mathematical object, in this case a PDFA, which can be further analyzed. Other works [5,14] do not provide a formalization in terms of a probabilistic automaton. The second goal is to give experimental evidence that by choosing an adequate equivalence the learned PDFA is a faithful representation of the LLM. \\n\\nRegarding case study 2, we agree that the use of \\u201cfloating-point\\u201d, which we have taken from [14] where the case study is proposed, is misleading. Actually, the purpose is to sample numbers in the interval [0,1] written as arbitrarily long sequences of digits following an initial dot. In the first experiment, the LLM is only allowed to use digits, while in the second one, it can use all its tokens representing numbers, which total 994 for GPT2. In both cases, the sampling proceeds until the terminal symbol is sampled (no bound is fixed). 
Guiding LLMs is a subject of both theoretical and practical interest, and these experiments provide evidence that our approach leads to faithful representations, in the sense that the learned PDFA and the guided LLM are statistically comparable if one looks at the distribution of sampled numbers. \\n\\n- $\\\\kappa$ has not been defined anywhere \\n\\n$\\\\kappa$ is the parameter of the quantization as equivalence between distributions. We will explain this and provide examples of equivalences. \\n\\n- Figs 7(a) and 7(b) are not labelled in this way.\\n\\nFig. 7(a) and 7(b) should be 7(above) and 7(below), respectively. We will put the appropriate labels to make this clear.\\n\\n- The reader will not know what Outlines [14] means, this must be explained.\\n\\nThe appropriate bibliographic reference to Outlines has been provided. We will add a brief description of the tool to enhance readability. However, explaining its inner workings is out of the scope of the paper.\"}", "{\"title\": \"Conceptual confusion\", \"comment\": \"What really confused me, and what makes this paper conceptually difficult to read, is the first definition followed by the sentence\\n\\\"Language models can be expressed in different ways, e.g., RNN, Transformers, and PDFA\\\"\\nThis is confusing because PDFAs, for example, span a very small subspace of what you call \\\"language models\\\" in the initial definition. The simple reason is that they are Markovian in the sense that the probability of each symbol in a given state is fixed and therefore does not depend on the path leading to this state. \\\"Language models\\\" on the other hand allow completely arbitrary history dependencies. Yet PDFAs and \\\"language models\\\" are discussed simultaneously throughout the paper, which makes it very difficult to follow.\\n\\nTo add to this confusion, it is not clear to me which \\\"language models\\\" correspond to probability measures constructed by Kolmogorov extension. 
It is only for these that the discussion on, e.g., congruences applies.\"}", "{\"title\": \"Response to weaknesses - Section 2 - Technical issue (a)\", \"comment\": \"We politely disagree with the reviewer. In the first place, the probability measure $\\\\mathbf{P}$ is clearly and well defined as the unique measure satisfying the conditions stated in page 2 of the paper. Its existence and uniqueness are proved in the appendix F (Prop. F.1).\\n\\nRegarding the measurable sets, it is standard practice to consider the sigma-algebra generated by the cylinder sets when considering probability measures over spaces of infinite sequences over finite alphabets. We will recall it at page 2 of the paper.\\n\\nOur proof of Prop. F.1 is based on Kolmogorov\\u2019s extension theorem, as it is stated in Theorem I.1.2 of [10] in the bibliography. For this, we first need to consider a space of infinite sequences. To achieve that, the construction embeds $\\\\Sigma^* \\\\cup \\\\Sigma^{\\\\omega}$ into $\\\\Sigma_{\\\\$}^{\\\\omega}$ by adding at the end of every finite sequence in $\\\\Sigma^*$ an infinite number of terminal symbols. \\n\\nThis corresponds to the definition of event $A$ in the proof. Then it is enough to define a consistent family of probability measures over the cylinder sets that correspond to prefixes, as required by the cited Theorem I.1.2 of [10] (*). More precisely, for each k>=1 consider the set $Cyl(k)$, of all cylinders $C[a_1,...,a_k]$ of the form $C[a_1,...,a_k]$ = { $(x_i)_{i=1}^{\\\\infty}: x_1=a_1,...,x_k=a_k $ }. \\n\\nIf for each k>=1 we can define a probability $P_k$ over the cylinders $Cyl(k)$ that satisfies the consistency condition (condition (1) in page 1 of [10]: $P_k(C[a_1,...,a_k]) = \\\\sum_{a_{k+1}} P_{k+1}(C[a_1,...,a_{k+1}]))$, then there exists a unique probability measure $\\\\mathbf{P}$ defined on the sigma algebra generated by all cylinders of $\\\\Sigma^{\\\\omega}$ such that $P(C[a_1,...,a_k])=P_k(C[a_1,...,a_k])$. 
The final step is to prove that the event $A$ (that was previously identified with $\\\\Sigma^* \\\\cup \\\\Sigma^{\\\\omega}$) has P probability one: $\\\\mathbf{P}(A)=1$. Uniqueness of $\\\\mathbf{P}$ also is guaranteed by Theorem I.1.2. of [10]. \\n\\n(*) Notice that the cited theorem does not require the consideration of (a) other cylinder sets or (b) any permutation invariance property. Indeed, for (a) under the cited consistency condition, the probability of any other cylinder set can be obtained by basic operations over sets (disjoint unions, complements, etc). For instance, if we consider two symbols a and b, and the cylinder set C=\\u201cthe first symbol is a, and the third symbol is b\\u201d, then its probability can be computed as the sum of the probabilities of the cylinders C_1=\\u201cthe first symbol is a, the second symbol is a, and the third symbol is b\\u201d and C_2=\\u201cthe first symbol is a, the second symbol is b, and the third symbol is b\\u201d. Secondly, (b) holds automatically since the cited version of Kolmogorov\\u2019s theorem assumes that the times (positions in the sequence) are given in an increasing order so there is no need to rearrange them. This permutation invariance property is only relevant when one defines the finite dimensional probabilities as a family indexed over arbitrary finite sets of times (positions) and is not related to any underlying commutative structure. For instance, if we consider two symbols a and b, and the cylinder set C=\\u201cthe first symbol is a, and the third symbol is b\\u201d, the permutation invariance property guarantees that its probability does not change if one defines it as C=\\u201cthe third symbol is b, and the first symbol is a\\u201d. 
This is automatic if by convention one defines the probability of cylinders by considering a specific ordering of the times (increasing in our case).\"}", "{\"title\": \"Response to weaknesses - Section 2 - Technical issue (c)\", \"comment\": \"Thanks for the observation. We modified the paper to add the normalization.\"}", "{\"title\": \"Comments about Section 3 (continued) - Overview of the algorithm\", \"comment\": \"Omit-Zero (as well as QNT) is inspired by Kearns and Vazirani\\u2019s algorithm [KV].\\n\\nOmit-Zero keeps a search tree whose leaves (Acc) represent congruence classes and whose inner nodes (Dis) are such that the least common ancestor of two leaves is a string showing that the leaves are actually not congruent. That is why inner nodes are called distinguishing, because they serve as evidence to disprove congruence.\", \"sift_is_the_search_procedure\": \"given a string, it finds the congruence class (leaf) where it possibly belongs. If no such leaf exists, it means that a new class has been found and it is added as a leaf to the tree by sift-update. Sift is used by the procedure build to construct an automaton from the tree: states are leaves and transitions from one state to another are found using sift (concatenating the string representing the congruence class, the so-called access string, with all symbols in the support of the leaf). Once the automaton is built, the algorithm checks if it is congruent with the target language model. If they are not, the counterexample is evidence of the existence of a class that is not in the tree. Then, procedure update adds a new leaf to the tree. The algorithm starts with an initial tree containing only the root (the empty string) from which a first automaton is built by InitilizeHypothesis.\\n\\nEQ is the equivalence query, which is the procedure responsible for checking whether the hypothesis and the target are congruent. 
When the target is an automaton, EQ can be implemented by the Hopcroft-Karp algorithm for testing equivalence of finite automata (reference [3] in the bibliography). We used an adaptation of this algorithm to perform the experimental evaluation of Omit-Zero in Section 3, because it allows us to efficiently check equivalence between the obtained PDFA and the target PDFA. Now, when the target system involves an LLM which is not an automaton, it is no longer possible to use it. In this case, it is standard to resort to sampling. In order to ensure that every sampled string $u$ is defined, that is, $P(u)>0$, we sample from the hypothesis PDFA using a random walk (which is sound because of Proposition 2.4).\\n\\nSifting a string $v$ defines a path which is the sequence of distinguishing strings (inner nodes) traversed by the sift operation when processing $v$ from the root to the leaf. In order to ensure that an inner node is indeed valid evidence of non-congruence, it must have a defined prefix (Proposition 2.4). This is guaranteed by requiring that every inner node starts with a symbol in the support of the associated distribution (equation 8). Such a requirement is fulfilled jointly by the correct processing of the counterexample which finds a defined prefix of it and procedure update when it adds an inner node. Then, every inner node $w$ in the path followed by sift for a string $v$ to leaf $u$ is correct evidence that $v$ could not possibly belong to any other equivalence class in the tree different from $u$.\", \"references\": \"[KV] Kearns, M., and Vazirani, U. V. (1994). 
An Introduction to Computational Learning Theory.\"}", "{\"title\": \"Kolmogorov Extension Theorem II\", \"comment\": \"The standard version of Kolmogorov's extension theorem is based on an inverse system indexed by the following directed poset: (i) all finite _tuples_ of elements of a set \\\\(T\\\\) (totally ordered and thought of as time) and (ii) all injections between these (thus \\\\( (t_1,t_3)\\\\leq (t_3,t_2,t_1) \\\\)). This explains why one needs both projections and permutations in the consistency conditions. Note that time is ordered but it makes sense to consider joint distributions indexed by any tuple, ordered or not. So this is not what distinguishes your case from this one.\\n\\nDudley's version of the theorem, see op.cit., is based on a more compact representation using the following directed poset: (i) all finite _subsets_ of elements of a set \\\\(T\\\\) and (ii) all injections between these. It still requires a much stronger notion of consistency than your theorem.\\n\\nYour version doesn't seem to fall into any of these templates, and I'm therefore not sure calling it Kolmogorov's extension theorem is particularly useful since it conveys all the wrong intuition. If I now understand correctly, you're interested in a much simpler inverse system indexed by (i) \\\\(\\\\mathbb{N}\\\\) and (ii) the standard ordering on \\\\(\\\\mathbb{N}\\\\) (i.e. \\\\(k\\\\leq k+1\\\\)).\"}", "{\"title\": \"Response to question 3\", \"comment\": \"3. In the runtime experiment, the probability of a symbol to be 0 is at least 0.9. Is this probability realistic when considering the guiding of an LLM?\\n\\nIndeed, the probability of 0 is likely to be greater than 0.9 in reality. 
For instance, Kuchnik et al. [5], cited in the bibliography, use top-40 for analyzing memorization and toxicity (also citing other papers) in the case of GPT-2 which has 50257 tokens, thus giving a 0.999 probability that a symbol has probability 0 under this sampling strategy. The same paper uses k=1000 for language understanding, which yields 0.98, and acknowledges that it is a conservative span. For GPT-2, 0.9 would correspond to sampling a set of approximately 5000 tokens, which is typically too large according to the literature. Therefore, it would not be realistic to test the algorithm on smaller probabilities.\"}", "{\"title\": \"Response to weakness 1\", \"comment\": \"We added a global response to this comment https://openreview.net/forum?id=CIcMuee69B&noteId=KoKOiS3cK8\\n\\nWe hope it adequately addresses your concerns.\"}", "{\"title\": \"Response to question 4\", \"comment\": \"4. Considering the problem of non-termination illustrated in the caption of Fig.1. It seems to me that a simple solution would be to avoid excluding $ using top_r. Am I missing something?\\n\\nIn general, it may not be appropriate to be able to terminate at every state. Take for example case study 2. The guide does not allow generating the string \\\".\\\" since it is not a legal number. Allowing $ to be in the support of the distribution of state q1 (Fig. 7(b)) would (wrongly) allow this string to be generated. Something similar happens in case study 1, where it is required to complete a full sentence. \\n\\nTherefore, always having $ in the support of the distribution would prevent capturing these examples.\", \"the_desired_property_could_be_stated_as_follows\": \"the probability of eventually terminating is equal to 1. This corresponds to P_$ (defined in pg. 
2) to be a probability distribution.\\n\\nIndeed, this property can be expressed in the probabilistic temporal logic PCTL [PMC] as the formula P(<>p)=1, where p is a proposition that is true at a state q if and only if $ is in the support of \\u03c0(q). For instance, such a formula could be model checked on a probabilistic automaton with the tool PRISM [PRISM].\", \"references\": \"[PMC] Baier, Ch. and Katoen, J.P. Principles of Model Checking. MIT Press, 2008. \\n\\n[PRISM] https://www.prismmodelchecker.org/\"}", "{\"title\": \"Response to weaknesses - Section 2 - Other issues\", \"comment\": \"We added the missing $ symbol at the beginning of section 2, thanks for pointing this out.\\n\\nThe purpose of using double brackets [[.]] and simple brackets [.] is to avoid confusion between congruence classes on $\\\\Sigma^*$ (denoted with [[.]]) and equivalence classes on the simplex of probability distributions (denoted with [.]). We believe this enhances readability.\\n\\nThe point we want to emphasize with the examples in Fig. 1 is that in order to guarantee termination when sampling from a PDFA one does not need to have positive probability of the terminal symbol $ at every state. The relevant condition is to have positive probability of terminating in the future of every state. See \\u201cResponse to question 4\\u201d to reviewer VvAN: https://openreview.net/forum?id=CIcMuee69B&noteId=GNRmkdcbz0 We will rephrase the caption of Fig. 1 to clarify this point. \\n\\nPlease notice that $reach(Q)$ is defined where it is used in the paragraph Quotients in page 3 as the set of states reachable from the initial state following a finite sequence of transitions (defined by $\\\\tau$).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Question 4 follow up\", \"comment\": \"This was not what I meant. I mean if $ is in the support before applying top_r it should be in the support afterwards. 
If it is not in the support before, I would not want to add it.\"}", "{\"title\": \"question 3 follow up\", \"comment\": \"Okay, I understand that, but your experiments use |\\\\Sigma|=20 instead of 50257. Does this not change the setting drastically? With top-40, I have 40 symbols with non-zero probability.\"}", "{\"summary\": \"This submission starts by presenting some background on probabilistic DFAs and describes congruences on these. The congruence of Eq (1) describes strings as equivalent if they agree on the conditional probability of each subsequent letter. The next section presents an algorithm for learning a congruence-minimal DFA based on a previous algorithm from [6]. Finally, the paper ends with a couple of experiments first looking at guiding GPT2 using an automaton, and then comparing sampling between a PDFA and GPT2.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The use of automata in the study of NN-based machine learning is extremely interesting and has led to many new insights and results (e.g. the excellent work of Weiss et al in, inter alia, [13].\", \"weaknesses\": \"As a general comment, this paper reads like a rushed and very early draft. I did not see a unifying story/narrative, and both motivation and details are missing\", \"section_2\": \"\", \"conceptual_issue\": \"When going from eq (1) to eq (4), an equivalence relation E is introduced on the simplex \\\\\\\\(\\\\Delta(\\\\Sigma_\\\\$)\\\\\\\\). Why? There's no motivation, and there are no examples. What are typical examples of equivalence relations on simplices? Whilst (1) is transparent, (4) no longer is. As a consequence, it is hard to understand where the rest of the section is going.\", \"technical_issues\": \"a) The definition of \\\\\\\\(\\\\mathbf{P}\\\\\\\\) is messy and confusing and as a consequence the proof of its existence is incomplete (and perhaps incorrect). 
With spaces like \\\\(\\\\Sigma^\\\\omega\\\\) you cannot dispense with measure theory. So the first step is to identify the measurable subsets of \\\\(\\\\Sigma^\\\\ast \\\\cup \\\\Sigma^\\\\omega\\\\). Moreover, since you need to apply Kolmogorov's extension theorem, you in fact need to consider \\\\(\\\\Sigma_\\\\$^\\\\omega\\\\). How are the measurable subsets of \\\\(\\\\Sigma^\\\\ast\\\\) encoded in those of \\\\(\\\\Sigma_\\\\$^\\\\omega\\\\)? Next, you need to define your joint probabilities on your cylinder sets. Here you only look at very specific cylinder sets, and this is why you cannot state, let alone prove, the \\\"permutation invariance\\\" property required by Kolmogorov's extension theorem. That it holds is not immediately obvious to me since we're dealing with a family of conditional probabilities over a non-commutative structure (words). But since the finite-dimensional distributions aren't even defined it's hard to say.\\n\\nb) I'm not sure about the proof of 2.1. If we write the definition of the congruence in the better format \\\\(P(uw)P(v)=P(u)P(vw)\\\\) and we assume that \\\\(P(u)=0\\\\), then we could very well have \\\\(P(v)>0\\\\) as long as \\\\(P(uw)=0\\\\) (which seems to hold anyway by the recursive factorisation given on page 2).\\n\\nc) The normalisation step is missing in the definition of \\\\(\\\\mathsf{samptop}\\\\).\", \"other_issues\": [\"The notation is sub-optimal. The subscript $ is missing at the beginning of sec 2. Why use semantics brackets [[-]] for equivalence classes?\", \"Fig 1: I don't see what is \\\"troublesome\\\" about \\\\(\\\\mathcal{B}\\\\), it does exactly what it's supposed to be able to do if you allow non-termination.\", \"Either say that all proofs are in the appendix once, or state them all (there's space).\", \"You don't define \\\\(reach(Q)\\\\)\"], \"section_3\": \"The reader needs some context. 
Why are you developing this algorithm? What is the learner? What is the teacher? What are the allowed \"learning actions\" (e.g. membership queries? equivalence queries? any other kind of queries?)? What is the general description of the algorithm of [6]? The reader shouldn't have to read [6] to understand what \\\\(\\\\mathsf{sift}\\\\) is. As it stands Sec 3 cannot realistically be understood by the average reader.\\n\\nThe labelling of the x-axis of the LHS graph in Fig 3 is unfortunate.\", \"section_4\": \"This section suffers from the same problems as Sec 3. What exactly are you trying to show and why? In particular, using PDFAs seems completely overkill for the kind of simple experiments carried out in case study 1 and Table 2. \\nFor case study 2, the guiding automaton allows every digit at every stage, so it's not doing very much guiding. What precision of floating-point are you using? And how many digits are sampled? (the precision just gives an upper bound) The reader will not know what Outlines [14] means, this must be explained. As it stands the results for this case study mean nothing to me.\\n\\n- \\\\(\\\\kappa\\\\) has not been defined anywhere\\n- Figs 7(a) and 7(b) are not labelled in this way.\", \"questions\": \"1. Should I read \\\\(\\\\mathcal{L}(\\\\sigma_1\\\\ldots\\\\sigma_n)\\\\) as \\\\(P[\\\\sigma_n\\\\mid \\\\sigma_1\\\\ldots\\\\sigma_{n-1}]\\\\)? I don't understand if the basic model is Markovian (as the DFA model suggests) or not (as the definitions on p2 and in Prop F.1 suggest).\\n\\n2. How would you define an arbitrary finite-dimensional marginal of \\\\(\\\\mathbf{P}\\\\)?\\n\\n3. What is the motivation for the \\\\(E\\\\) introduced before (3)? 
Can you give an example?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"PDFA helps understanding\", \"comment\": \"Thanks, I can see the value of PDFA helping understanding.\"}", "{\"title\": \"Follow up on question 2\", \"comment\": \"Minimality is not required.\\n\\nEquation (6) defines the congruence over states via the congruence over strings: it says that two states are congruent iff they are reached by congruent strings. You could have defined congruence over states otherwise: two states $q$ and $q\\u2019$ are congruent iff for every continuation $w$, $[\\\\pi^\\\\ast(q, w)] = [\\\\pi^\\\\ast(q\\u2019,w)]$. Then, it follows that if $q=\\\\tau^\\\\ast(u)$ and $q\\u2019=\\\\tau^\\\\ast(v)$ are congruent then $u$ and $v$ will be congruent as well since $[\\\\pi^\\\\ast(uw)] = [\\\\pi^\\\\ast(q, w)] = [\\\\pi^\\\\ast(q\\u2019,w)] = [\\\\pi^\\\\ast(vw)]$ for all $w$. The reverse also holds: take $u$ and $v$ congruent, then for all $w$, $[\\\\pi^\\\\ast(uw)] = [\\\\pi^\\\\ast(vw)]$, and therefore $[\\\\pi^\\\\ast(q, w)] = [\\\\pi^\\\\ast(q\\u2019,w)]$, so $q$ and $q\\u2019$ are congruent. Hence, both definitions are equivalent.\\n\\nA PDFA is minimal iff for any pair of distinct states $q$ and $q\\u2019$ it happens that $q$ and $q\\u2019$ are not congruent. This implies that if $q=\\\\tau^\\\\ast(u)$ and $q\\u2019=\\\\tau^\\\\ast(v)$, with $q$ different from $q\\u2019$, then $u$ and $v$ are not congruent either. And conversely. Overall this implies that a minimal PDFA has as many states as the quotient defined by the congruence over $\\\\Sigma^\\\\ast$.\"}", "{\"title\": \"Response to Question 2\", \"comment\": \"2. I don't understand if the basic model is Markovian (as the DFA model suggests) or not (as the definitions on p2 and in Prop F.1 suggest).\\n\\nWe don\\u2019t understand what you mean by \\u201cbasic model\\u201d. 
If one considers the sampling procedure relative to a given language model $\\\\mathcal{L}$ defined by recursively sampling the $n$-th symbol with probability distribution $\\\\mathcal{L}(\\\\sigma_1\\\\ldots\\\\sigma_{n-1})$, provided $P[\\\\sigma_1\\\\ldots\\\\sigma_{n-1}]$ is not zero, then the stochastic process $X_n$ = \\u201cthe $n$-th sampled symbol\\u201d is not in general Markovian. This is even the case for PDFAs where the transition probability from one symbol to another is not well defined unless one specifies the entire past. In these cases the process could be better modeled as a Hidden Markov Process since the stochastic process Q_n=\\u201dthe n-th state of the system\\u201d is indeed Markov. This does not impact the contribution of the paper regarding the active PDFA learning algorithm capable of dealing with zero probability transitions. Besides, it provides a sound basis for comparing sampling from a language model and an extracted PDFA since both processes belong to the same family of stochastic processes (i.e. Hidden Markov Models).\"}", "{\"title\": \"Follow up on question 4\", \"comment\": \"Where to enable termination depends on the application. Our approach does not force any particular behavior.\\n\\nOf course, you can define say a sampling strategy top'_r as you suggest which will let the terminal symbol to remain in the support of the resulting distribution when it is permitted by the guide (since synchronization with the guide is done before applying the sampling strategy) if this is the behavior you want. \\n\\nNow, using top'_ r or top_r may result in different languages because the set of strings for which P_$ is not null may be different. In any case, learning a PDFA will also help checking and understanding that.\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thank you for your response! It helps me to understand the paper better. I am still reluctant to change my score. 
If I happen to review the next version of the paper with detailed explanation on various operators, I will give a better score to this paper.\"}", "{\"title\": \"response to question 3 follow up and question 1\", \"comment\": \"We added experiments that provide empirical evidence that Omit-Zero is significantly faster than Teacher-Filter (that is QNT with the help of a filter of 0-probability transitions) in case study 2 when 994 numeric tokens are used. Figure 6 in the revised version summarizes these results.\"}", "{\"title\": \"Response to Question 4\", \"comment\": \"4. What is the motivation for the $E$ introduced before (3)? Can you give an example?\\n\\nResorting to some kind of tolerance relation between distributions is usual practice when it comes to approximating the behavior of language models with probabilistic automata (e.g. [Weiss et al 2019, Clark and Thollard, 2004]), in order to group in a single state strings whose continuations slightly differ in probability. Eventually, this grouping could result in an approximation with a finite number of states even if the image of the language model contains infinitely many distributions, while keeping the error of the approximation as small as desired or preserving the property to be checked. Moreover, using equivalences instead of tolerances (e.g., reference [6] in the paper) leads to a well-defined notion of algebraic quotient and allows capturing the behavior of the language model under usual sampling strategies such as top-k through the associated top-k equivalence, defined as: two distributions are top-k equivalent if their k most probable symbols are the same, or obtaining probabilistic automata which are consistent with specific performance metrics such as word error rate (WER) or normalized discounted cumulative gain (NDCG). 
In the paper we used quantization equivalence: for instance, for $\\\\kappa=2$ we have the partition of the interval $[0 \\\\ldots 1]$ into the set of quantization intervals $[0]$, $(0,0.5)$, $[0.5,1)$, $[1]$, and two distributions $\\\\delta_1$ and $\\\\delta_2$ are equivalent if for each symbol $\\\\sigma$, $\\\\delta_1(\\\\sigma)$ and $\\\\delta_2(\\\\sigma)$ fall into the same quantization interval. In particular, the singleton intervals $[0]$ and $[1]$ are needed to individualize 0 probabilities. Of course, other variants could be defined. We will clarify this and add the appropriate references.\", \"references\": \"[Weiss, Goldberg, and Yahav, 2019] Learning Deterministic Weighted Automata with Queries and Counterexamples. NeurIPS, 2019.\\n\\n[Clark & Thollard, 2004] Alexander Clark and Franck Thollard. Pac-learnability of probabilistic deterministic finite state automata. Journal of Machine Learning Research, 5:473\\u2013497, 2004.\"}" ] }
CIN2VRxPKU
Evaluating Deep Unlearning in Large Language Models
[ "Ruihan Wu", "Chhavi Yadav", "Russ Salakhutdinov", "Kamalika Chaudhuri" ]
Machine unlearning has emerged as an important component in developing safe and trustworthy models. Prior work on unlearning in LLMs has mostly considered unlearning tasks where a large corpus of copyrighted material or some specific training data are required to be removed. In this work, we consider the task of unlearning a fact from LLMs, which can be challenging as related facts can be deduced from each other, and investigate how well current unlearning methods for LLMs succeed at this task. Specifically, we formally propose a framework and a definition for deep unlearning facts that are interrelated. We design the metric, recall, to quantify the extent of deep unlearning. To enable us to systematically evaluate the extent of deep unlearning undistracted by other factors, we construct a synthetic dataset EDU-RELAT, which consists of a synthetic knowledge base of family relationships and biographies, together with a realistic logical rule set that connects them. We use this dataset to test four unlearning methods in four LLMs at different sizes. Our findings reveal that in the task of deep unlearning only a single fact, they either fail to properly unlearn with high recall, or end up unlearning many other irrelevant facts. Our dataset and code are publicly available at: https://anonymous.4open.science/r/deep_unlearning_anonymous-2C73.
[ "large language models", "machine unlearning", "knowledge base" ]
Reject
https://openreview.net/pdf?id=CIN2VRxPKU
https://openreview.net/forum?id=CIN2VRxPKU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xwxlLix6HI", "u6UhpaWHkb", "meoSUKQP6Y", "lSwbuZAeSp", "ezGZXTujeQ", "Y9KBEOql61", "XElhvZnT5X", "UcAgzUNj8t", "QodZ5QrM6G", "PNGhOaehPq", "Kzs3MLvFzw", "FaPcdgKPMk", "8dwvzQHURP", "6aPsRi4HVK", "2JZMRnrJPz" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "meta_review", "official_comment" ], "note_created": [ 1730605639786, 1732182195269, 1732735970121, 1733195122913, 1732182499577, 1730498284854, 1732698809971, 1732182438036, 1732182000715, 1730721549372, 1732648884347, 1732593383985, 1737523737094, 1734464917025, 1732521498206 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5981/Reviewer_nGBY" ], [ "ICLR.cc/2025/Conference/Submission5981/Authors" ], [ "ICLR.cc/2025/Conference/Submission5981/Authors" ], [ "ICLR.cc/2025/Conference/Submission5981/Authors" ], [ "ICLR.cc/2025/Conference/Submission5981/Authors" ], [ "ICLR.cc/2025/Conference/Submission5981/Reviewer_yUcx" ], [ "ICLR.cc/2025/Conference/Submission5981/Reviewer_nGBY" ], [ "ICLR.cc/2025/Conference/Submission5981/Authors" ], [ "ICLR.cc/2025/Conference/Submission5981/Authors" ], [ "ICLR.cc/2025/Conference/Submission5981/Reviewer_wbf1" ], [ "ICLR.cc/2025/Conference/Submission5981/Authors" ], [ "ICLR.cc/2025/Conference/Submission5981/Reviewer_wbf1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5981/Area_Chair_6Bzd" ], [ "ICLR.cc/2025/Conference/Submission5981/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces \\\"deep unlearning\\\" as a novel approach in the domain of large language models (LLMs), emphasizing its importance in effectively erasing certain facts. 
As the target fact can be deduced from logical rules, superficial unlearning methods, which solely unlearn the target fact, cannot unlearn it successfully. The authors also present new metrics, including Accuracy and Recall, to evaluate this process, backed by experiments across various LLMs and unlearning methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Novel Contribution: The introduction of deep unlearning and a curated dataset highlights a significant gap in current research, drawing attention to an important and underexplored issue.\\n2. Innovative Metrics: The proposal of Recall as a new evaluation metric, accompanied by a detailed algorithm for addressing the NP-hard nature of the problem, demonstrates thoughtful consideration of the challenges involved in deep unlearning. \\n3. Comprehensive Experiments: The thorough experimental setup across four LLMs and unlearning methods strengthens the paper's credibility and provides valuable insights into the effectiveness of the proposed approach.\", \"weaknesses\": \"1. Code Availability: The lack of code and dataset release until after acceptance may hinder reproducibility and limit the community's ability to validate the findings. If possible, we encourage the author to give an anonymous link to the git repo. If it is forbidden in the rebuttal process, you can include more raw samples in the Appendix with explanations. More importantly, as you proposed a benchmark, the statistics of the datasets are of vital importance, which should be discussed in the main part, rather than the appendix.\\n2. Missing Results: The author did not report the accuracy on the dataset of the models fine-tuned but not subjected to unlearning, which makes it difficult to gauge the impact of the unlearning methods accurately. Please include the baseline accuracy results for the fine-tuned models before unlearning.\\n3. 
Poor Presentation: The author should improve the writing and figures for better illustration. For instance, Figures 3 and 6 do not clearly convey the analyzed conclusions. Maybe an organized table would be better. From my perspective, for this work, the related work is crucial for the understanding of the benchmark, which can be moved to Sec 2.\", \"questions\": \"Refer to weaknesses; I hope that you can address these concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Refer to Weaknesses.\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive comments (novel contributions, innovative metrics and comprehensive experiments) and constructive feedback. We reply to your concerns below.\\n\\n**Code availability (Reply to Weakness 1)**: Thank you for your suggestion! We would like to provide our anonymous link (https://anonymous.4open.science/r/deep_unlearning_anonymous-2C73) including the code of all evaluated methods and the dataset. We have also put this link in the revision (as shown in the abstract and experiment section).\\n\\n**Results of finetuned models (Reply to Weakness 2)**: Thanks for pointing this out. After finetuning the models on our synthetic data, the accuracy is 100% for every finetuned model. We have added this description in line 377 of the revised pdf.\\n\\n**Organization of the result presentation and related work (Reply to Weakness 3)**: Thank you for your suggestion, which helped us to improve the presentation. The results of Figure 3 have also been presented in Table 3. We further highlighted the best value across models in the revision. We agree that having related work before introducing the details of our new problem can help with the understanding. We will move the related work section to section 2 in our final revision.
We would really appreciate it if you could let us know why you would like to keep your score, so we could improve our paper further!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer yUcx,\\n\\nThank you for your constructive feedback. As the discussion stage is closing in a day, we would appreciate it if you could take a look to our responses and let us know if your questions have been addressed. We are happy to discuss if there are any additional questions. Thank you for your time!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"General Comment by Authors\", \"comment\": \"We thank all the reviewers for their time and insightful feedback. We appreciate that the reviewers (nGBY, yUcx) find that our proposed problem is novel and important, the evaluation metric is innovative and reasonable, and the experiments are comprehensive.\\n\\nFor each reviewer's valuable suggestions and critiques, we respond in our individual responses and have edited some paragraphs in the revised paper to improve the paper according to the feedback; the differences are highlighted in blue. We sincerely hope to continue this insightful discussion during the discussion period and would like to thank you again for your constructive feedback!\"}", "{\"summary\": \"This paper introduces the concept of \\\"deep unlearning\\\", which studies the unlearning tasks in large language models where logical relationships between facts need to be considered. The authors constructed a synthetic dataset EDU-RELAT, containing family relationships and biographical information, to evaluate four unlearning methods across four different-sized LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Presents an important and novel problem - deep unlearning that considers logical reasoning between facts\\n2. Constructs a structured evaluation dataset with reasonable logical rules and relationships\\n3. 
Proposes reasonable evaluation metrics (recall and accuracy) and designs approximate algorithms to compute these metrics\\n4. Comprehensive experiments that reveal the limitations of current methods in deep unlearning\", \"weaknesses\": \"1. While deep unlearning sounds reasonable overall, the paper's setting may not align with practical scenarios. In real machine unlearning cases, related knowledge is typically forgotten together (e.g., forgetting all content related to Harry Potter) rather than just forgetting a single relationship. Under this setting, is it still important to forget all knowledge that could potentially derive the current relationship?\\n\\n2. Given that real-world relationships can be far more complex than the R defined in this paper's dataset, could there be situations where forgetting one piece of knowledge requires forgetting an excessive amount of content? In multi-hop reasoning scenarios, a large amount of knowledge might need to be forgotten, which raises another question: is deep unlearning always necessary in unlearning scenarios? If unlearning is applied for copyright protection, can knowledge derived through multi-hop reasoning constitute infringement?\\n\\n3. The paper only focuses on logical rules in the specific domain of family relationships, which is rather narrow in scope.\\n\\n4. No new solutions are proposed to improve deep unlearning effectiveness, remaining only at the problem analysis level.\", \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the clarification. I have no further question, however I would like to keep my score.\"}", "{\"comment\": \"Thank you for your positive comments (novel and important problem, structured dataset, reasonable metrics and comprehensive experiments) and your constructive feedback. 
We reply to your concerns below.\\n\\n**Practical scenarios of (single) fact unlearning (Reply to Weakness 1&2)**: Thank you for raising this concern. Machine unlearning indeed has many use cases, such as concept removal and copyright protection, and in different use cases, the definition and corresponding methodology can be tailored differently. Our (single) fact unlearning is particularly important to address privacy risks. A realistic scenario can be: for celebrities, we want to unlearn their home addresses, but keep their public profiles such as their awards and achievements. In the revised introduction (line 34-46 of the revised pdf), we have clarified the difference between fact unlearning and other variants of unlearning such as data removal and copyright protection and have introduced the detailed use case of fact unlearning.\\n\\n**The study of solutions (Reply to Weakness 4)**: We would like to thank you for expecting the study of new solutions. We agree that this is an important direction of future work and have discussed this in the last section. In this paper, we believe the problem definition, framework and evaluation themselves are already important. In addition, the solution study and problem proposal can have a conflict of interest in one paper, in the sense that the problem proposal can potentially be tailored to favor the proposed solution \\u2013 that\\u2019s why we consciously made this decision.
We have revised our intro (line 39-47 in the revision) to highlight the different use cases of unlearning in LLMs and added the insight of WHP provided by your review (line 424-428 in the revision) to emphasize WHP is more suited to concept or topic unlearning where the unlearning data is usually a large corpus. This further *motivates that for the use case of (single) fact deep unlearning, different and newer methods may need to be designed*.\\n\\nFor the experimental setup of WHP, we tried our best to adapt it to the task of single fact unlearning. As described in Appendix D, the reinforced model is finetuned with only the fact to be unlearnt. For reproducibility, we also attach this anonymous link (https://anonymous.4open.science/r/deep_unlearning_anonymous-2C73) including the cleaned code of all evaluated methods and the dataset in the revision (as shown in the abstract and experiment section).\\n\\nWhen we cleaned the code, we found an elusive bug in the Recall and Accuracy calculation code for TV and WHP specifically. After removing the bug and rerunning the code, we find the performance of TV looks better and that of WHP remains similar. We have updated the results in the revision and also released the cleaned code (in the link above) accordingly. \\n\\n**Why do we evaluate deep unlearning with synthetic facts rather than real-world facts? (Reply to Weakness 2)** We use synthetic data in order to *control* the evaluation experiments, removing potential factors for noisy evaluation such as (1) a partial observation of the underlying knowledge base in the LLM leading to a false sense of success and (2) different underlying knowledge bases across LLMs making it harder to draw consistent conclusions. For more details on these factors, kindly see line 287-301 of the revised paper. 
Therefore, to have better control in the evaluation, we decided to create the synthetic dataset, which is the popular option in other evaluation work as well [1, 2].\\n\\nThank you for pointing out a reference for how to determine if a fact is in the LLM. This can help alleviate the factor (1), but the evaluation noise can still exist because the public knowledge base itself is incomplete.\\n\\n**How would current unlearning methods perform on real-world fact unlearning? (Reply to Weakness 3 and Question 1)** Current unlearning methods would likely have numerically higher accuracy on real data (which can have an incomplete LLM knowledge base) than on our benchmark. However, the problem of deep unlearning in real data is actually more challenging \\u2013 because the unlearner would need to reason about missing facts and their probabilities, and decide if they might be in the knowledge base of the LLM. We have added this discussion in the revised pdf in Section 7.\\n\\n**The choice of threshold (Reply to Question 3)**: We choose the threshold of $0.8$ as an indication of relatively high value of accuracy and recall. We have also shown the curve of accuracy and recall in Figure 4. \\n\\n**Typos (Reply to Weakness 4)**: Thank you for pointing them out and thank your effort for carefully checking our Appendix too! We have corrected them accordingly in the revision.\\n\\n[1]Maini, Pratyush, et al. \\\"Tofu: A task of fictitious unlearning for llms.\\\" arXiv preprint arXiv:2401.06121 (2024).\\n\\n[2] Allen-Zhu, Zeyuan and Li, Yuanzhi. 
\\\"Physics of Language Models: Part 3.1, Knowledge Storage and Extraction.\\\" Forty-first International Conference on Machine Learning (2024)\"}", "{\"summary\": \"This paper presents a critique of current unlearning methods through the lens of fact unlearning - it shows that while current unlearning methods can unlearn the target fact in isolation, they often fail to unlearn other related facts through which the target fact can be logically deduced, thus negating the effect of unlearning. The authors refer to the task of unlearning additional facts which are logically related to the target fact, as \\\"deep unlearning\\\". They introduce 2 metrics based on set overlap: recall and accuracy, which quantify the extent of \\\"deep unlearning\\\". All experiments are conducted on a small, synthetic dataset derived from a synthetic knowledge base and a set of logical rules.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper extends the discussion on the \\\"illusion of machine unlearning\\\" vis-\\u00e0-vis current unlearning methods, to the relatively simple yet seemingly hard task of unlearning a single fact from the LLM's parametric memory, given that logically related facts are also present in the LLM. This work shows, somewhat surprisingly, that when current unlearning methods are tasked with erasing a single fact from the LLM's parametric knowledge, they do not erase related facts which also share entities/objects with the target fact, e.g., erasing the target fact **F1**: Y is child of X, may fail to erase logically related facts such as: **F2**: Z is husband of X, and **F3**: Z is father of Y, even though Y as an entity appears in F3, and this would allow the erased fact to re-surface via logical deduction. On a synthetic toy dataset, they provide a reasonable set-overlap-based metric for quantifying the degree of such unlearning. 
While it is already known that most knowledge that is supposedly deleted by current unlearning methods can be re-surfaced via adversarial/jailbreak prompting and probing, this work highlights limitations of current methods at the atomic level of single-fact deletion, hinting at a tension between an LLM's reasoning capabilities and the effectiveness of unlearning techniques.\", \"weaknesses\": [\"Firstly, I find it a bit counter-intuitive that even an unlearning method such as WHP (Who's Harry Potter) would not suppress the effect of other facts which share entities with the target fact, such as in the case of facts related to familial connections where a target entity appears verbatim in other facts. This leads me to wonder if the failure to \\\"deeply unlearn\\\" a target fact is simply an artifact of the way in which the unlearning methods are applied e.g. WHP requires finetuning a reinforced model on a corpus that is much larger than just a handful of facts, in order for the logit difference with the baseline to be significant. The paper does not really discuss how each of the unlearning methods are actually applied, and whether any of the tested unlearning methods are even compatible with the setting of single fact unlearning - in my view, methods such as NPO, TV, and WHP are clearly more suited for a setting where the unlearning target is defined around a *concept* or *topic*, rather than single **fact**. A discussion around the compatibility of the evaluated unlearning methods with the single fact unlearning setting, and what role the small size of the synthetic dataset plays in limiting the effectiveness of unlearning methods, would be a useful addition. For example, if the WHP method is properly applied as it is designed i.e. 
for concept unlearning rather than fact unlearning, I imagine that it would successfully suppress the fact that \\\"Harry is child of Lily\\\" even if it was fine-tuned on other facts that mentioned that \\\"James is father of Harry\\\" and \\\"James is husband of Lily\\\". To help readers better understand the experimental setup and interpret the results, I recommend that the authors discuss the adaptations they made to apply WHP, NPO, TV etc., to the fact unlearning task.\", \"The synthetic dataset appears to be biased towards highly deducible relationships (e.g., familial connections) which is not really representative of real-world knowledge structures where logical connections are typically more complex and less *deducible*. In my view, the authors should have included an evaluation on a dataset extracted from a real-world knowledge base with partial/noisy/incomplete logical connections between facts - my guess is that since most real facts require more than just simple deductive reasoning e.g. may require multi-hop reasoning, this problem of knowledge deducibility or fact reconstruction would be less of an issue even with current unlearning methods. While the authors claim that it is hard to conduct prompt based evaluations for determining if a fact is in the LM (false negatives), I'd like to point out that alternatives to the simple prompts used by the authors (as shown in Table 1) do exist (see [1]), such as MCQ-based binary choice Q&A or latent supervised/unsupervised probing. It would also have been useful to see how the \\\"approximate\\\" recall and accuracy metrics will scale to real KBs where many of the logical connections between facts are unknown.\", \"The authors should discuss the limitations of their current dataset and how these might affect the generalizability of the results.\", \"Minor typos: typo in appendix C, should be \\u201calgorithm 3\\u201d instead of \\u201cAlgorithm 9\\u201d. 
Caption of Figure 7 should also say \\u201calgorithm 3\\u201d instead of \\u201calgorithm 1\\u201d. Typos in each caption of subfigures in figure 8: should say \\u201cMinimal Deep Unlearning\\u201d sets.\", \"[1] [Eight Methods to Evaluate Robust Unlearning in LLMs](https://arxiv.org/abs/2402.16835)\"], \"questions\": \"Q1: How would the accuracy and recall metrics change w.r.t. the # of minimal deep unlearning sets discovered when applied on a KB with incomplete/partial logical dependencies? Would recall increase since the minimal deep unlearning set is smaller and closer in size to the actual set unlearned by the algorithm, thus having higher overlap? What about accuracy?\", \"q2\": \"Given the small size of the target facts to be unlearnt, how do you satisfy the fine-tuning data requirements of methods such as NPO & WHP?\", \"q3\": \"What motivated 0.8 as the threshold value for comparing recall/accuracy (Figures 3 and 6)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer wbf1,\\n\\nThank you so much for increasing your recommendation!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"comment\": \"Thanks to the authors for revising the paper to incorporate the suggested discussions. I have updated my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper introduces \\\"deep unlearning\\\" as a novel approach in the domain of large language models (LLMs), emphasizing its importance in effectively erasing certain facts. As the target fact can be deduced from logical rules, superficial unlearning methods, which solely unlearn the target fact, cannot unlearn it successfully. This work presents an important and novel problem - deep unlearning that considers logical reasoning between facts. 
The authors also present new metrics, including Accuracy and Recall, to evaluate this process, backed by experiments across various LLMs and unlearning methods.\\n\\nThere were several non-trivial issues raised in the reviews. First, the experiments did not compare the utility performance of a vanilla model with the unlearned model. Second, there have been concerns regarding the limited scope of the study, which merely focused on logical rules. Third, there were also concerns among the reviews regarding the quality of presentation. Some (not all) of the issues were incorporated into the revision.\", \"additional_comments_on_reviewer_discussion\": \"Some (not all) of the issues were incorporated into the revision.\"}", "{\"comment\": \"Dear reviewers,\\n\\nThank you for your time and feedback, and we would be happy to answer further questions if there are still any concerns. Please let us know if there are any additional clarifications we can provide!\\n\\nBest Regards,\\n\\nAuthors\"}" ] }
CI9JMBAsPg
DocGenome: A Large Benchmark for Multi-Modal Language Models in Real-World Academic Document Understanding
[ "Renqiu Xia", "Song Mao", "Xiangchao Yan", "Hongbin Zhou", "Bo Zhang", "Haoyang Peng", "Jiahao Pi", "Daocheng Fu", "Wenjie Wu", "Hancheng Ye", "Shiyang Feng", "Mingsheng Li", "Bin Wang", "Chao Xu", "Conghui He", "Pinlong Cai", "Min Dou", "Botian Shi", "Sheng Zhou", "Yongwei Wang", "Bin Wang", "Junchi Yan", "Fei Wu", "Yu Qiao" ]
Scientific documents record research findings and valuable human knowledge, comprising a vast corpus of high-quality data. Thus, leveraging multi-modality data extracted from these documents and assessing large models' abilities to handle scientific document-oriented tasks is meaningful. Despite promising advancements, large models still perform poorly on multi-page scientific document extraction and understanding tasks, and their capacity to process within-document data formats such as charts and equations remains under-explored. To address these issues, we present DocGenome, a structured document dataset constructed by annotating 500K scientific documents from 153 disciplines in the arXiv open-access community, using our custom auto-labeling pipeline. DocGenome features four characteristics: 1) Completeness: It is the first dataset to structure data from all modalities including 13 layout attributes along with their LaTeX source codes. 2) Logicality: It provides 6 logical relationships between different entities within each scientific document. 3) Diversity: It covers various document-oriented tasks, including document classification, visual grounding, document layout detection, document transformation, open-ended single-page QA and multi-page QA. 4) Correctness: It undergoes rigorous quality control checks conducted by a specialized team. We conduct extensive experiments to demonstrate the advantages of DocGenome and objectively evaluate the performance of current large models on our benchmark.
[ "Scientific document structuring", "Document understanding", "Chart Table and Equation Understanding" ]
Reject
https://openreview.net/pdf?id=CI9JMBAsPg
https://openreview.net/forum?id=CI9JMBAsPg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xUtvUsGd28", "x3Ny7UdpsN", "woTo2rgiJU", "v98mVClJGw", "saFR9MOpze", "qEm6RciuUv", "gYKBN6L9my", "gLzgqCxBbg", "e9MCoKEb4h", "YU8IepoOn4", "TRm0audWG1", "RNcJgj6zCG", "NpZmxd1Iz5", "KMG8wgrD9T", "FwcUGu7sNV", "AzOKiuq3SR", "9yv4l4h4WE", "94s7txtE3y", "1OZ82IwGj9" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730610671372, 1733186073416, 1732515973401, 1732369686569, 1734961964546, 1732497854577, 1730660057502, 1730691882340, 1732508588165, 1737523796449, 1732418706515, 1732369977475, 1732799266661, 1732796292705, 1732390619847, 1732494981103, 1732369464672, 1732418841140, 1730870764636 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6842/Reviewer_i9VG" ], [ "ICLR.cc/2025/Conference/Submission6842/Authors" ], [ "ICLR.cc/2025/Conference/Submission6842/Authors" ], [ "ICLR.cc/2025/Conference/Submission6842/Authors" ], [ "ICLR.cc/2025/Conference/Submission6842/Area_Chair_diNG" ], [ "ICLR.cc/2025/Conference/Submission6842/Authors" ], [ "ICLR.cc/2025/Conference/Submission6842/Reviewer_4Uws" ], [ "ICLR.cc/2025/Conference/Submission6842/Reviewer_TmKL" ], [ "ICLR.cc/2025/Conference/Submission6842/Reviewer_TmKL" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6842/Authors" ], [ "ICLR.cc/2025/Conference/Submission6842/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6842/Authors" ], [ "ICLR.cc/2025/Conference/Submission6842/Reviewer_i9VG" ], [ "ICLR.cc/2025/Conference/Submission6842/Reviewer_TmKL" ], [ "ICLR.cc/2025/Conference/Submission6842/Authors" ], [ "ICLR.cc/2025/Conference/Submission6842/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6842/Reviewer_c3jL" ] ], "structured_content_str": [ "{\"summary\": \"This work presents a large-scale multimodal academic document understanding dataset. It includes training and high-quality test sets, along with 7 benchmarking tasks proposed. The manuscript provides a clear description of the data collection and quality control processes. A series of large multi-modal models are benchmarked on the proposed test set, and training experiments are conducted to verify the effectiveness of the proposed training set.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed large-scale multimodal academic document understanding dataset makes a solid contribution to the related research community, especially given the scarcity of such datasets.\\n\\n2. The dataset collection and quality control are carefully designed, executed, and documented. The provided anonymous GitHub repository includes detailed documentation and links for downloading the dataset, supporting reproducibility and accessibility.\\n\\n3. A training set is provided along with the test set, and training experiments are conducted to verify the effectiveness of the training set.\", \"weaknesses\": \"1. The QA data creation process depends heavily on GPT-4. As the QA pairs are generated by GPT-4, it can introduce biases regarding the type and difficulty of the questions. Further examinations into the potential biases would therefore be beneficial.\\n\\n2. The evaluation metrics used for different tasks could be improved. Specifically, edit distance or BLEU may not accurately evaluate Equation-to-LaTeX and Table-to-LaTeX tasks, as these metrics do not account for the semantic equivalence of different LaTeX expressions. Additional evaluation could also be performed to verify the grammatical correctness of generated LaTeX expressions. 
Moreover, Open-ended QA tasks are evaluated using GPT-4 to compare reference and generated answers. While this is likely a reasonable approach, human evaluation to verify the reliability of GPT-4\\u2019s judgment would be beneficial.\", \"questions\": \"1. The manuscript discusses that the GPT-4-generated QA pairs are verified and updated by human annotators. What is the acceptance rate of the original QA pairs, or alternatively, what is the editing rate? Given that GPT-4 achieves around 60%-70% accuracy on QA tasks in Table 3, this suggests that a substantial portion of the QA pairs were likely updated.\\n\\n2. Why are large multimodal models (e.g., GPT-4V, QWen-VL) not benchmarked on the Equation-to-LaTeX and Table-to-LaTeX tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"ICLR Reminder of Reviewer 4Uws\", \"comment\": \"Dear Reviewer 4Uws,\\n\\nWe have thoroughly addressed your concerns and look forward to your feedback, which will greatly assist us in making any necessary revisions or clarifications.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\nAuthors of paper 6842\"}", "{\"title\": \"Reply to Reviewer TmKL, Round 3\", \"comment\": \"Dear Reviewer TmKL,\\n\\nThank you for your further insightful questions. We appreciate the opportunity to clarify the details regarding the SciHub dataset and our comparison with DocXChain.\\n\\n>***Q1. SciHub Dataset Details***\\n\\nWe sampled the entire SciHub dataset and created a subset of 966 papers, covering disciplines such as medicine, chemistry, biology, and humanities **as shown in the following table**, in order to evaluate the generalization capability of our models across a wider spectrum of disciplines. While the exact distribution of fields within the SciHub dataset varies, it provides a diverse representation that complements the areas covered by arXiv. 
[This figure](https://postimg.cc/G8NQ6sLX) visualizes some paper examples from our constructed SciHub dataset.\\n\\nRegarding overlap with arXiv, only a very small portion of disciplines, such as computer science, intersect with arXiv's fields, and these account for **less than 3% of the data**. On the other hand, as you mentioned, our primary focus was on testing our approach in scientific fields where arXiv's coverage is highly limited, thereby highlighting the versatility and robustness of our document parsing method.\\n\\n||Medicine|Chemistry|Biology|Humanities|Physics|Engineering|Math|Ecology|Computer Science|Economics|Geography|\\n|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|Amount|237|159|150|121|84|67|36|35|27|25|25|\\n|Proportion|24.53%|16.46%|15.53%|12.53%|8.70%|6.94%|3.73%|3.62%|2.80%|2.59%|2.59%|\", \"table\": \"The discipline distribution of the 966 papers from our constructed Scihub dataset\\n\\n\\n>***Q2. Comparison with DocXChain***\\n\\nDocXChain, a highly popular tool currently, is designed to handle a wide range of open-domain document parsing tasks. While it is a practical tool rather than a peer-reviewed research publication, our goal was to benchmark its performance against existing tools, demonstrating significant improvements in accuracy, particularly for papers outside the arXiv domain. This comparison is specifically designed to address the question of whether our approach can achieve a certain level of effectiveness, **when applied to scientific documents from other disciplines**. It is indeed evident from Table 6 of the main text, that our method can be applied to the Scihub domain.\\n\\nWe have supplemented the corresponding dataset information in the [revised paper](https://openreview.net/pdf?id=CI9JMBAsPg), and please refer to Table 7 in our main text. We hope this additional information addresses your questions and provides a clearer understanding of our work. 
We are grateful for your feedback, which has been invaluable in refining our manuscript.\\n\\nThank you once again for your thoughtful review and consideration.\\n\\n&nbsp;\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Reply to Reviewer 4Uws: Part-2\", \"comment\": \"***W2. More insights/experiments on layout information and logical relationships***\\n\\nWe deeply appreciate the reviewer\\u2019s thoughtful feedback and the time and effort dedicated to evaluating our paper. We propose the following insights and experiments: \\n\\n- **Better layout detection aids in document understanding:** We have already included this part in Section 5.4 of our original paper. For questions related to figures or tables, we directly annotated the detection boxes for figures or tables on the document images to serve as the image input for the QA task. Taking InternVL-1.5 as an example, when the images of the papers contain layout information (this information can be considered as a prompt) relevant to the questions, the performance of the QA task can be further enhanced.\\n\\n|Model|#Params|With Layout Information in Image|Single-Page QA|Multi-Page QA|\\n|-|:-:|:-:|:-:|:-:|\\n|InternVL-1.5|26B|&#10007;|0.4529|0.3577|\\n|InternVL-1.5|26B|&#10004;|**0.4922**|**0.4030**|\\n\\n- **The relationships between document elements facilitate the expansion of multimodal data:** \\nUtilizing the relation information between entities, we can index associated modal information for specific tasks, enabling data re-annotation and expansion. For instance, in constructing table-related QA tasks, we can not only obtain images of tables but also index the text describing the tables in the document, thereby enriching multimodal information for re-annotation and task expansion. The relationships make our annotation information more flexible and actionable.\\n\\n\\n***W3. 
Concerns about variance and the structure of documents due to LaTeX***\\n\\nWe sincerely appreciate the reviewer\\u2019s comments and the valuable time spent reviewing our paper. As mentioned by the reviewer, in Table 6 (Line 458), Mathpix demonstrates slightly better performance on Out-Of-Distribution (OOD) data, compared to the model trained using our DocGenome, e.g., 0.4873 vs. 0.6627.\\n\\nHowever, a direct comparison between our model and Mathpix may not be entirely fair, as Mathpix is commercial software developed with an investment of hundreds of millions of dollars. **In contrast, our DocGenome is an automated annotation pipeline that constructs datasets without incurring any cost**. On the other hand, the diversity of our proposed DocGenome dataset in scientific literature is well ensured, thanks to the wide range of document layouts provided by the arXiv open-access community, which covers 153 academic disciplines. To validate this diversity, we conducted experiments from two key perspectives, as outlined below:\\n\\n- **Page-level diversity**: Figure A.1 in the supplementary materials demonstrates the diversity of the page-level data distribution.\\n- **Layout diversity**: Table 6 presents experiments conducted on a different domain, SciHub. The results demonstrate that models trained using the layout information from DocGenome outperform DocXChain and achieve significant improvements. This further highlights the diversity of the layout information in the DocGenome dataset.\\n\\n\\n***Q1. More details in terms of the inter-annotator agreement or additional details.***\\n\\nIn detail, the process is as follows: \\n1. GPT-4 was used to generate 7028 questions for 1757 paper samples. \\n2. Quality checkers first examine the questions, retaining or modifying them to obtain correct questions.\\n3. Each question was then allocated to two quality checkers for review and correction. \\n4. 
The checkers attempted to correct incorrect answers and assigned confidence scores. \\n5. Only QA pairs with the same answer and the highest confidence scores from both checkers were retained for the final dataset.\\n\\nFinally, 2498 QA pairs were retained to form the QA test set, of which 1672 were modified by the quality checkers.\\n\\nWe have already provided confidence score criteria in Appendix E.\\n\\n\\nThank you once again for your valuable feedback. We have revised our paper based on your suggestions, with all modifications highlighted in orange. Your input has been immensely helpful in enhancing the quality of our work.\"}", "{\"metareview\": \"This paper introduces DocGenome, a dataset curated through automated methods consisting of a large number of samples representing the structured content of academic papers. To create this dataset, it describes a pipeline developed to automatically parse LaTeX source files. To evaluate the dataset, the paper includes a test suite of seven tasks. These tasks cover vision-only, language-only, and multi-modal challenges, providing a framework to assess model performance on DocGenome. The paper also demonstrates that training on DocGenome data can improve model performance on downstream tasks, showing its utility for research in document understanding and related fields.\\n\\nReviewers generally found the paper well-written and the released dataset valuable for the research community. They appreciated the quality control measures implemented during dataset creation and acknowledged the usefulness of the training set. \\n\\nHowever, a significant critique focused on the paper's methodological and technical contributions, as well as the dataset's novelty, which reviewers considered relatively limited (4Uws, c3jL, TmKL). Some reviewers also expressed concerns regarding limited coverage of non-text modalities (c3jL). 
Concerns were also raised about the automated construction process of the dataset, particularly its reliance on GPT-4 (i9VG, 4Uws). Additional issues included a lack of experiments addressing novel aspects of the dataset, potential biases in its creation, the emphasis on LaTeX formatting, and requests for further details\\u2014all of which were largely resolved during discussions. \\n\\nOverall, while the resource offers value to the community, its technical contributions and novelty appear limited. The authors' response summarized the weaknesses raised by reviewers as a separate comment, but notably omitted addressing these major critiques.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised questions about the bias and diversity of the data created by LLMs. In response, the authors performed additional experiments.\\nReviewers asked for additional details on the specifics of data quality measures, the annotators, parts of the pipeline (e.g., QA generation), and the OOD dataset. The author response addresses all these issues. \\nIn response to one question about details of faculty used for data inspection, the response provides the exact number and hours, but does not provide details of the approximate range of compensation used for data collection. \\nIn response to questions about additional experiments, the response includes several additional experiments (e.g., out-of-domain generalization, and experiments on Equation-to-LaTeX and Table-to-LaTeX tasks).\"}", "{\"title\": \"Reply to Reviewer TmKL, Round 2\", \"comment\": \"Dear Reviewer TmKL,\\n\\nThank you for your continued engagement and for clarifying your concerns regarding our manuscript. 
We understand that your question centers on the representativeness of LaTeX-created articles across all scientific disciplines, rather than the importance of LaTeX itself.\\n\\nWe acknowledge that while arXiv covers a substantial number of fields, it does not encompass all areas of research, such as civil engineering, chemistry, and materials science. We appreciate your point that conferences and journals in these fields often offer both LaTeX and Word templates, which reflects the diversity of preferences within the scientific community.\\n\\nTo address your concern about representativeness, we would like to emphasize that our work primarily focuses on fields where LaTeX is predominantly used, as evidenced by its widespread adoption in journals and conferences within these domains. While our dataset may not cover every scientific discipline, it provides a comprehensive representation of those areas where LaTeX is the standard.\\n\\nRegarding your question, **\\\"how representative LaTeX-created articles are,\\\"** we have provided a specific implementation plan in the main text as follows:\\n\\n- For disciplines not covered by the arXiv open-access community, such as civil engineering, chemistry, and materials science, we assume that we can collect their PDF data;\\n\\n- We can leverage the models **trained using our proposed DocGenome dataset** to perform the inference processing on these out-of-distribution disciplines, such as civil engineering, chemistry, and materials science, thereby achieving generalization to these datasets;\\n\\n- Actually, **we have conducted additional experiments to evaluate the generalization ability of our approach to other disciplines**. Specifically, we collected PDF data from the Scihub domain, which encompasses a broader range of scientific fields, including civil engineering, chemistry, and materials science not fully represented in arXiv. 
Using our DocGenome-trained models, we applied our scientific document parsing method to the Scihub domain. As shown in Table 6 of the main text, our method has demonstrated strong performance in scientific document parsing for the Scihub domain, even when applied to diverse disciplines outside the core focus of arXiv. Notably, our approach outperforms existing layout detection tools, such as DocXChain, in terms of accuracy and robustness.\\n\\n\\n| Model | mAP@0.5:0.95\\u2191 | Title | Text | Figure | Caption | Equation | Table | Footnote |\\n|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|DocXChain (Open-source Toolchain for Document parsing) | 37.99 | 32.53 | 59.00 | 67.17 | 38.71 | 12.98 | 38.99 | 16.54 |\\n|YOLOv8 (Trained using the proposed DocGenome) | 50.15 | 42.59 | 64.87 | 56.65 | 64.51 | 47.14 | 47.08 | 28.21|\\n\\nTable caption: Results on the Scihub domain, which shows the generalization ability of the proposed DocGenome on other disciplines outside the core focus of arXiv.\\n\\nFinally, **we would like to emphasize** that we also recognize the value of diverse documentation tools and formats in scientific publishing. Our intention is not to diminish the importance of other formats but to highlight the strengths and capabilities of LaTeX in the contexts where it is most prevalent.\\n\\nWe have carefully revised our paper in response to your second-round questions and highlighted the changes in orange. Please refer to page 9 and Table 6. Looking forward to your response and further discussion.\"}
The authors also curate a test set of 7 tasks (vision-only, language-only, and multi-modal) to evaluate the performance of models on the dataset.\\n\\n3. The authors also verified that the training data could be helpful and lead to better performance on downstream tasks via training on DocGenome data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is nicely written, and the authors have conducted a lot of experiments evaluating different aspects of this dataset.\\n\\n2. The dataset, as well as the document conversion pipeline, could be a helpful resource for the community.\", \"weaknesses\": \"1. Overall this paper feels like a resubmission from venues like the NeurIPS dataset and benchmark track. The methodology contribution is relatively weak: the method of creating the dataset (i.e., parsing latex sources) is not novel (e.g., a lot of earlier work like GROTOAP, PubLayNet, and DocBank all use similar approaches), neither is the framing of the tasks (all these tasks have been studied before to some extent). I am inclined to a borderline reject unless the authors can give a strong statement in terms of the novelty of the dataset or the methodology.\\n\\n2. While I find there could be one potentially novel aspect of the dataset---the \\\"13 categories of component units and 6 types of logical relationships between them\\\", it is not clear how the authors actively use this component in the experiments. (The empirical studies mostly focus on using individual components but not the relationships.) I'd encourage the authors to provide more insights/experiments on this.\\n\\n3. 
There's another limitation of the dataset: since it is curated via automatically parsing latex sources, the variance and the structure could be limited, and the trained models might not be able to transfer to other types of papers (which is also mentioned by the authors in line 458).\", \"questions\": \"1. The authors mentioned that \\\"Each QA pair is reviewed by three reviewers for cross-verification.\\\" (line 310) It would be great if the authors could provide more details in terms of the inter-annotator agreement or additional details. \\\"Finally, the two manually-evaluated results, along with the automatically-evaluated result are cross-verified with the original text to ensure accuracy and consistency\\\" (line 317) I am not sure how this process works, can you provide more details?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
If a document is created using other formatting software, such mistakes are not flagged.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"SciHub and DocXChain\", \"comment\": \"Thanks for providing the clarifying information. I have the following questions.\\n\\n1. What are the details of the SciHub dataset? Do we know which scientific fields are represented in the dataset? Also, any sense for overlap with arXiv?\\n\\n2. DocXChain: DocXChain is a useful tool but not a peer-reviewed research paper, as far as I can tell. How do we know if it represents the state-of-the-art?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Reply to Reviewer 4Uws: Part-1\", \"comment\": \"| Modality Class | Amount of Data | Data Form | Data Description |\\n|-|:-:|:-:|:-:|\\n| Algorithm | 112,706 | Image-LaTeX pairs | algorithmx package in LaTeX |\\n| Image-Caption | 4,404,530 | Image-Text pairs | Images in the paper and the corresponding caption |\\n| Figure-TiKZ | 4,612,156 | Image-TiKZ code pairs | Flowcharts, geometric diagrams, and other images compiled using tikzpicture along with their corresponding TiKZ representations |\\n| Equation-LaTeX | 11,784,597 | Image-LaTeX pairs | Displayed formula |\\n| Footnote | 22,957 | Image-Text pairs | Footnotes appearing in the paper |\\n| List | 905,843 | Image-LaTeX pairs | Listed in an enumerated format |\\n| Table | 678,931 | Image-LaTeX pairs | Tables formatted in LaTeX |\\n| Text | 16,310,161 | Image-Text pairs | Plain text regions in the paper |\\n| Title | 6,191,633 | Image-LaTeX pairs | Headings at all levels |\\n| Text-EQ | 27,282,610 | Image-LaTeX pairs | Text regions that include inline formulas |\\n| Code | 50,849 | Image-Code pairs | Actual code regions, such as Python, etc |\\n| Abstract | 689,449 | Image-Text pairs | Abstract region in the paper |\\n\\n----------------\\n\\n\\nWe sincerely appreciate your review 
and feedback on this paper. Although similar works (such as GROTOAP, PubLayNet, and DocBank) have also adopted methods to parse LaTeX source code, ***we would like to further clarify the uniqueness and contributions of DocGenome through the following points***:\\n\\n- As summarized in the table above, our proposed DocGenome not only captures the layout information of each modality but also allows for the customized extraction of each complex modality, such as Table, Equation, Text-EQ, Title, and Abstract. This approach comprehensively preserves all the information in a paper and facilitates the development of downstream applications, such as [equation extraction](https://github.com/opendatalab/UniMERNet), [document content extraction](https://github.com/opendatalab/MinerU), [table content extraction](https://github.com/UniModal4Reasoning/StructEqTable-Deploy), [automatic survey](https://github.com/AutoSurveys/AutoSurvey), etc.\\n\\n- By utilizing Table-LaTeX pairs and Equation-LaTeX pairs, we can develop effective parsers that convert table and equation images into markdown or LaTeX source code. 
For example, **to the best of our knowledge, the recently popular open-source repository [minerU](https://github.com/opendatalab/MinerU) leverages our DocGenome dataset to achieve this functionality.**\\n\\n- Using the Title and Abstract, we can conduct research on the [automatic paper survey](https://github.com/AutoSurveys/AutoSurvey) task, aiming to generate high-quality academic surveys based on the title and abstract of a given paper.\\n\\n- DocGenome also includes multiple modality categories, such as flowcharts and Image-caption pairs (shown in [this figure](https://postimg.cc/Vd92rS2n)), which can serve as corpora for tools like image-to-text generation tasks and geometric question-answering models.\\n\\n- Compared to prior works like GROTOAP, PubLayNet, and DocBank, our proposed DocGenome offers a significantly larger dataset (**6.8 million page-level structured data points** and **72 million region-level structured data points**) along with more diverse structured representations, such as Table-LaTeX and Figure-TiKZ. To the best of our knowledge, it stands as the largest and most diverse scientific document dataset in terms of both scale and variety.\\n\\nTherefore, DocGenome demonstrates significant advantages in terms of data quantity, data diversity, data value, and data quality. We believe that DocGenome will serve as a critical resource for advancing multimodal document parsing and related research areas, providing strong support to the community and inspiring more innovative applications and research outcomes in the future.\\n\\n***To address your concerns, we have further revised our paper and marked the changes in orange text.***\\n\\nLast but not least, we would like to take this opportunity to thank the anonymous reviewer again for the insightful comments and valuable suggestions, which greatly helped to improve the technical quality and the presentation of this manuscript. We sincerely hope that our response has addressed your concerns. 
We would greatly appreciate it if you could reconsider the contributions of our proposed DocGenome to the community and its potential value for various academic research tasks in the future.\"}", "{\"title\": \"Reply to Reviewer i9VG\", \"comment\": \"***W1. Concerns about type and difficulty of the questions generated by GPT-4V***\\n\\nThank you for your observation. As detailed in the appendix, we initialized QA pair generation with GPT-4 using a diverse set of QA examples for few-shot prompting. Moreover, our quality control team, comprising 20 individuals with PhD or Master\\u2019s degrees, not only corrects answers but also edits incorrect questions to ensure the quality and difficulty of the QA pairs.\\n\\n***W2. The evaluation metrics used for different tasks could be improved.***\\n\\nWe also appreciate your suggestion, and we are indeed working on designing more reasonable evaluation metrics. We\\u2019ve provided [a figure](https://postimg.cc/dZ2Kymwx) illustrating our ongoing research on a novel metric design for the Equation-to-LaTeX task, which evaluates both LaTeX text similarity and region matching between the predicted LaTeX-rendered equation and the original image.\\n\\n***Q1. More details about QA pair generation and quality assurance***\\n\\nIn detail, the process is as follows: \\n1. GPT-4 was used to generate 7028 questions for 1757 paper samples. \\n2. Quality checkers first examine the questions, retaining or modifying them to obtain correct questions.\\n3. Each question was then allocated to two quality checkers for review and correction. \\n4. The checkers attempted to correct incorrect answers and assigned confidence scores. \\n5. Only QA pairs with the same answer and the highest confidence scores from both checkers were retained for the final dataset.\\n\\nFinally, 2498 QA pairs were retained to form the QA test set, of which 1672 were modified by the quality checkers. 
**The editing rate is 66.93%.**\\n\\nWe have already provided confidence score criteria in Appendix E.\\n\\n***Q2. General VLMs on the Equation-to-LaTeX and Table-to-LaTeX tasks***\\nWe\\u2019ve included additional experiments with a general VLM on the *Equation-to-LaTeX* and *Table-to-LaTeX* tasks. Taking Qwen2VL-7b as an example, its performance is not only inferior to the commercial closed-source model Mathpix but also falls short of our EqVLM and TableVLM models trained on DocGenome.\\n\\n- **Equation-to-LaTeX:**\\n\\n|Model|Edit Distance|Jaccard Similarity|BLEU|Cosine Similarity|\\n|-|:-:|:-:|:-:|:-:|\\n|Mathpix|0.4738|0.7226|0.6045|0.4472|\\n|Qwen2VL-7b|0.5824|0.6979|0.5506|0.1449|\\n|EqVLM-B|**0.2111**|**0.8736**|**0.8621**|**0.6352**|\\n\\n- **Table-to-LaTeX:**\\n\\n|Model|Edit Distance|Jaccard Similarity|BLEU|Cosine Similarity|\\n|-|:-:|:-:|:-:|:-:|\\n|Mathpix|0.4436|0.7730|0.5826|0.3528|\\n|Qwen2VL-7b|0.4876|0.7598|0.6979|0.4016|\\n|TableVLM-B|**0.2223**|**0.8997**|**0.8800**|**0.5552**|\\n\\nLast but not least, we would like to take this opportunity to thank the anonymous reviewer again for the insightful comments and valuable suggestions, which greatly helped to improve the technical quality and the presentation of this manuscript. We sincerely hope that our response has addressed your concerns.\"}
My question was not on the importance of LaTeX but on how representative LaTeX-created articles are. Per arXiv's statement \\\"arXiv is a free distribution service and an open-access archive for nearly 2.4 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.\\\" This covers 8 areas. There are many more areas which are not covered: civil engineering, chemistry, and materials science, to name a few.\\n\\nEven in these areas covered by arXiv, the conferences and journals provide both LaTeX and Word templates.\\n\\nI believe there is a misunderstanding of my question \\\"Is the collection representative of scientific documents?\\\".\\n\\nIn full disclosure, I have used LaTeX for all my papers and my comment does not stem from ignorance or under-appreciation of LaTeX.\\n\\nI once again thank the authors for responding to my comment.\"}", "{\"title\": \"Reply to Reviewer TmKL\", \"comment\": \"We deeply appreciate the reviewer\\u2019s thoughtful feedback and the time and effort dedicated to evaluating our paper. We will address the reviewer\\u2019s concerns regarding the effectiveness of using the LaTeX format in the following part.\\n\\nLaTeX is the de facto standard for scientific writing, particularly in STEM fields, due to its ability to produce professional, high-quality documents. It is widely adopted by top journals and conferences, such as IEEE, ACM, and Springer. 
**Note that** many top journals and conferences require or recommend submitting manuscripts in LaTeX.\\n\\nThe LaTeX format has the following key advantages:\\n\\n- **Superior Mathematical Typesetting**: LaTeX excels in handling complex equations, ensuring precision and aesthetic consistency.\\n\\n- **Extensibility**: Thousands of packages (e.g., amsmath, graphicx, biblatex) allow customization for various academic needs.\\n\\n- **Graphics and Visualization**: Tools like TikZ and PGFPlots enable seamless creation of high-quality, scalable graphics and plots. By leveraging the proposed DocGenome, we can extract the TiKZ code and the corresponding rendered image, as illustrated [in this figure](https://postimg.cc/Vd92rS2n).\\n\\n- **Portability and Collaboration**: As a plain-text system, LaTeX supports version control (e.g., Git) and works across platforms (Windows, macOS, Linux).\\n\\n**On the other hand**, the arXiv community hosts papers under **the CC license**, and all papers are represented in LaTeX format. As a result, the structured scientific literature we obtain based on arXiv (using LaTeX code) also complies with the CC license, which helps to widely promote the dissemination and use of our open-source dataset (DocGenome). Moreover, by leveraging LaTeX code, we can automatically extract annotated structures from 600,000 scientific papers without incurring any human-annotation costs.\\n\\n**Overall, the collection is representative of scientific documents**, as LaTeX is the preferred tool for academic writing in STEM fields due to its precision and professional formatting. While LaTeX does impose strict formatting rules, these are not constraints but rather mechanisms to ensure accuracy and consistency. **For instance, the \\\"??\\\" marker for missing references serves as a clear indicator of errors, prompting authors to address them before finalizing the document**. 
This feature actually enhances the quality of scientific writing by reducing the likelihood of overlooked mistakes. In contrast, other formatting software may not flag such issues, potentially leading to incomplete or inconsistent references. As a result, LaTeX\\u2019s strictness contributes to its reputation as a standard for rigorous scientific documentation.\\n\\n\\nLast but not least, we would like to take this opportunity to thank the anonymous reviewer again for the insightful comments and valuable suggestions, which greatly helped to improve the technical quality and the presentation of this manuscript. We have further added descriptions of LaTeX's representational ability to the main text. We hope our response has addressed your concerns.\\n\\n\\nWe would appreciate it if you could reconsider the representational ability and extensibility of our work, as well as our contributions to this community. Thank you.\"}", "{\"title\": \"Response to Reviewer i9VG\", \"comment\": \"We sincerely appreciate your thoughtful feedback and recognition of the contributions of our work. Thanks!\"}", "{\"summary\": \"The paper proposes a new dataset for academic document understanding. By processing 500k documents from arXiv (using their source LaTeX files), the authors create a new large dataset which covers diverse disciplines, preserves data in all modalities from papers, and covers 7 document-oriented tasks.\\nThe paper describes its automatic processing and labeling pipeline, along with the two metrics used for quality assurance. The authors then describe how the dataset is split into train and test subsets, where the data is divided into tiers based on the previously mentioned metrics and 1004 papers are sampled from the top tier. 
\\nThe sampled papers are then used to create QA pairs about their content (both single-page and multi-page questions) using GPT-4V, which are then validated by professional faculty members.\\nThe paper also presents the benchmarking of different LLMs on its test set, along with the usage of its training data to effectively create new models that outperform selected baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents a new large dataset of multi-modality academic documents for document-understanding tasks. This is a welcome and useful target as most existing datasets are limited in scope, diversity, and especially do not preserve all modalities of data in the source papers, losing important information.\\n\\nThe process to create the dataset is well presented and pragmatic, and both the toolset and processed data can be valuable in themselves or for further collections and annotations for new tasks. \\n\\nWhile somewhat small when compared to the overall collected data, the annotated test set with questions is shown to be already a useful benchmark for recent large multi-modality models.\", \"weaknesses\": \"Unfortunately, the dataset also presents somewhat limited novelty and impact.\\n\\nIn its current described form, one of its main contributions is the created test set with annotated QAs. However, these seem to have limited coverage of non-text modality QA, and the annotated relationships are limited to the layout level, not focusing on critical data/information relationships. As a dataset targeting multi-modality models, it is critical that these are emphasized.\\n\\nThis is especially the case for charts/plots and tables, which are rich in information and relationships. 
I missed seeing some analysis on the modalities and specific discussions on issues and how the dataset addresses them.\", \"questions\": \"With so many different disciplines covered in the data, how exactly were the faculty members selected to review the annotated dataset?\\n\\nWill the data and codebase be released under a permissive license? With the lack of explicit table-QA or image-QA/chart-QA annotations, for example, it would be critical that the data can be used for re-annotation and extensions.\\n\\nIn line 161 you say \\\\ref commands are removed, but these seem essential in relationship extraction. Also, Table 2 implies they are indeed used.\\n\\nIf GPT-4V was used to create all questions, is there a reason its performance in Table 3 is not so high and even worse than GPT-4o?\\n\\nHow many QA pairs were actually created and used? My understanding is that 4 QA pairs are created per sample, but only 3k total QA pairs were kept? Did I miss some details on how this was filtered?\\n\\nPlease provide more details on the sampled data for the OOD experiments. This data should also be released for reproducibility.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
CI5Cj0vktS
Robust Barycenter Estimation using Semi-Unbalanced Neural Optimal Transport
[ "Milena Gazdieva", "Jaemoo Choi", "Alexander Kolesov", "Jaewoong Choi", "Petr Mokrov", "Alexander Korotin" ]
Aggregating data from multiple sources can be formalized as an *Optimal Transport* (OT) barycenter problem, which seeks to compute the average of probability distributions with respect to OT discrepancies. However, in real-world scenarios, the presence of outliers and noise in the data measures can significantly hinder the performance of traditional statistical methods for estimating OT barycenters. To address this issue, we propose a novel scalable approach for estimating the *robust* continuous barycenter, leveraging the dual formulation of the *(semi-)unbalanced* OT problem. To the best of our knowledge, this paper is the first attempt to develop an algorithm for robust barycenters under the continuous distribution setup. Our method is framed as a $\min$-$\max$ optimization problem and is adaptable to *general* cost functions. We rigorously establish the theoretical underpinnings of the proposed method and demonstrate its robustness to outliers and class imbalance through a number of illustrative experiments. Our source code is publicly available at https://github.com/milenagazdieva/U-NOTBarycenters.
[ "unbalanced optimal transport", "barycenter", "generative modeling" ]
Accept (Poster)
https://openreview.net/pdf?id=CI5Cj0vktS
https://openreview.net/forum?id=CI5Cj0vktS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zGuxsC4Aba", "tshUrYx6VD", "tfPTTl2Ozo", "oKv2vwuadU", "lGP1prHkXR", "jSBWfgbnyt", "h2LJxpbeNT", "gpqQOxlZNG", "g4QjnMuaeq", "boSjA2XhV1", "bl4Cy2VvzT", "QiKYPcKq1s", "Qe9VhKgYNi", "QLgVmDSnt4", "MvBzViSliX", "LVzaeBk9HF", "Ke1xWeLzPc", "Ji13YodqdH", "Ds9VYwvF35", "BJvTWnp6Kl", "B6lVxs5NY8", "8WuijZmfO3", "6hluFnTmYw", "6eiZVa7XkW", "697HToJkJN", "4eg3BLndAA", "3jggKD1Bsw", "3aPY41RbjQ", "1b97vuM7wU" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732407405953, 1733164021729, 1730020681800, 1733164082880, 1730608819166, 1732316055521, 1732316937112, 1732316366237, 1732316592211, 1732315395727, 1732312557397, 1732825075394, 1732317198251, 1732978345097, 1732313335991, 1732356906389, 1732557236069, 1730706550299, 1733226145351, 1734738399644, 1732826012402, 1732316309683, 1732313489987, 1737524103469, 1732825206951, 1730353523863, 1732825839655, 1732362218829, 1733210030254 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11104/Reviewer_hyxj" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Reviewer_niQu" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Reviewer_MXNH" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Reviewer_MXNH" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Reviewer_niQu" ], [ "ICLR.cc/2025/Conference/Submission11104/Reviewer_MXNH" ], [ "ICLR.cc/2025/Conference/Submission11104/Reviewer_nQfu" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Area_Chair_Q999" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Reviewer_hyxj" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Authors" ], [ "ICLR.cc/2025/Conference/Submission11104/Reviewer_nQfu" ] ], "structured_content_str": [ "{\"comment\": \"Dear the Authors,\\n\\nThanks for your response. However, I am still not convince by your answer to my concern about the convergence analysis of the algorithm. I think such analysis is very important to investigate the theoretical properties of the optimal solution, and to determine whether that algorithm is practical or not. Given this reason, I decided to keep my rating unchanged.\"}", "{\"comment\": \"Dear reviewer,\\n\\nAs the rebuttal phase deadline is approaching in a few hours, we would greatly appreciate your feedback on our responses to the reviews.\\n\\nThank you.\\n\\nBest regards, The Authors\"}", "{\"summary\": \"In this work, authors propose to solve the continuous unbalanced optimal transport barycenter. 
To do that, they derive a dual formulation of the problem, which is a min-max problem over potentials and conditional distributions. They solve the problem by parametrizing the potential and conditional distributions with neural networks. Finally, they show that their method works on three experiments, demonstrating that they recover the right barycenters, that the barycenter is robust to outliers and to class imbalance, and that it can handle using different costs.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is overall well written and nice to read.\\n\\nIt provides one of the first methods to solve continuous UOT barycenters leveraging a dual formulation and neural network parametrizations, which is an interesting contribution.\\n\\nSeveral convincing experiments are done demonstrating that the method learns the UOT barycenter well, that it is robust to outliers and class imbalance, and that it works with several costs.\", \"weaknesses\": \"The main weakness in my opinion is that the method feels really incremental compared to [1]. The main difference is that it is adapted to the UOT problem, which only slightly changes the formulation of the dual, and how to sample from the barycenter for inference.\\n\\nAnother weakness, which is classical with UOT, is that the choice of the unbalancedness parameters does not seem easy.\\n\\nThe method also needs to solve a min-max problem, which is probably unstable and very costly.\", \"questions\": \"A term seems to be missing in the definition of $\\\\psi$-divergence, see Definition 1 in [2].\\n\\nThe sentence line 227 is not clear to me. The $m$-congruence is a constraint of the problem. So I do not understand why it is written \\\"if the potentials satisfy the m-congruence, then the optimal value of the SUOT barycenter can be derived by solving (8)\\\". 
The SUOT barycenter can always be derived by solving (8) and the potentials necessarily satisfy this constraint when solving (8).\\n\\nIn Corollary 1, it is stated that the sup is taken over $f_{[1,K]}$. Isn't it also taken over $m$?\\n\\nIn Theorem 2, equation (12), shouldn't it be an argmin?\\n\\nIn Section 5.1, it is stated that the UOT problem is equivalent with the OT problem between rescaled distribution. Is it truly equivalent? And could we solve the OT barycenter between the rescaled distributions instead of solving the SUOT barycenter?\\n\\nIs it proved that $T_{1\\\\\\\\#}\\\\mathbb{P}_1$ gives the UOT barycenter?\\n\\nI think that the reference [3] is missing. In their experiments, they compute robust OT barycenters via unbalancedness.\", \"typos\": [\"Line 11: The first sentence of the abstract feels weird: I don't see what is the \\\"common challenge\\\"\", \"Line 107: \\\"This transition is valid since we work the infimum in weak\\\"\", \"Line 142: \\\"wights\\\"\", \"Line 365: what is $T$ in $T_1 = \\\\lambda_1 Id + \\\\lambda_2 T$\", \"Legend of Figure 4: $\\\\mathbb{P}_0$ -> $\\\\mathbb{P}_1$ and $\\\\mathbb{P}_1$ -> $\\\\mathbb{P}_2$\", \"[1] Kolesov, A., Mokrov, P., Udovichenko, I., Gazdieva, M., Pammer, G., Burnaev, E., & Korotin, A. (2024). Estimating Barycenters of Distributions with Neural Optimal Transport. arXiv preprint arXiv:2402.03828.\", \"[2] S\\u00e9journ\\u00e9, T., Peyr\\u00e9, G., & Vialard, F. X. (2023). Unbalanced optimal transport, from theory to numerics. Handbook of Numerical Analysis, 24, 407-471.\", \"[3] S\\u00e9journ\\u00e9, T., Bonet, C., Fatras, K., Nadjahi, K., & Courty, N. (2023). Unbalanced optimal transport meets sliced-Wasserstein. 
arXiv preprint arXiv:2306.07176.\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a scalable approach for estimating the robust continuous barycenter by using the dual formulation of the (semi-)unbalanced OT problem. This is the first attempt to develop an algorithm for robust barycenters under the continuous distribution setup. They model this problem as a min-max optimization problem and provide theoretical underpinnings, along with experimental results on synthetic and real-world datasets to demonstrate the robustness and adaptability of their method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper introduces a continuous SUOT barycenter estimation method that addresses the issue of robustness against outliers and imbalances in real-world datasets.\", \"Rigorous derivation of the SUOT-based framework and solid theoretical support for the proposed model give the approach a solid foundation.\"], \"weaknesses\": \"Experiments are not sufficient, and the baselines are not comprehensive.\", \"questions\": [\"Given that the Wasserstein barycenter of Gaussian distributions has a closed-form solution, could the authors provide experimental verification for the Gaussian distribution case?\", \"Can the robustness parameter $\\\\tau$ be automatically optimized during training rather than manually tuned? If not, for different proportions or types of noise, a different $\\\\tau$ is usually required. Will this limit the practicality of the scheme?\", \"How does the method work with high-dimensional data? 
In other words, the support size of the measures may be large in practical applications. While the support size of the measures in your experiments seems rather small.\", \"Why don't you compare the methods [1,2] based on robust OT? Please add corresponding experiments.\", \"[1] Nietert S, Goldfeld Z, Cummings R. Outlier-robust optimal transport: Duality, structure, and statistical analysis[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2022: 11691-11719.\", \"[2] Wang X, Huang J, Yang Q, et al. On Robust Wasserstein Barycenter: The Model and Algorithm[C]//Proceedings of the 2024 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics, 2024: 235-243.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer MXNH (continuation)\", \"comment\": \"(Ending of the answer to question (3).)\\n\\nHowever, to further showcase the performance of our solver in the case of large data space dimensions, we conducted a **new experiment** for estimating the barycenter of images of young and elderly individuals from the FFHQ (Karras et al., 2019) dataset. Here we run our solver in the latent space of the pretrained StyleGAN2-ada (Karras et al., 2020) generator consisting of $18\\\\times 512$ vectors, i.e., having 18 times bigger dimension than we considered before. This experiment highlights the potential of our approach to manipulate images by learning barycenters between distinct image distributions, enabling controlled transitions across semantic attributes. \\n\\n*More detailed explanation regarding this **new experiment** is given in Appendix C.2 in the revised version of our paper.*\\n\\n\\n**(4) Why don't you compare the methods (Nietert et al., 2022, Wang et al., 2024) based on robust OT? Please add corresponding experiments.**\\n\\nThank you for your question. 
However, we humbly think that comparing with these methods is outside the scope of our paper. To our understanding, the work (Nietert et al., 2022) (we added the citation to the new revision) proposes a notion of robust Wasserstein distance (alternative to what we use). They do not indicate any way to use their distance in the context of the barycenter problem, i.e., they do not propose any barycenter solver. Moreover, the proposed method does not learn the (robust) OT mapping itself - practically, their objective is a modification of the WGAN loss (which operates only with a discriminator or critic), and it does not recover the desired mapping between the source and target distributions. Therefore, it is **impossible to compare** with them (even in a setup like our experimental Section 5.1).\\n\\nAt the same time, exploring the possibility to adapt their robust Wasserstein distance for the barycenter problem is definitely an interesting point for future research, but it will require separate research with its own theory and experimental validation. Regarding the work (Wang et al., 2024) (mentioned in Related works Section 3) - they propose a *discrete* robust Wasserstein barycenter solver which falls outside our considered *continuous* computational setup, see our Section 2.3. However, for completeness, we tested the performance of the classic discrete approach (Cuturi \\\\& Doucet, 2014) for balanced OT barycenter computation in the **new experiment** on continuous barycenter estimation, see Appendix C.1 in the revised version of our paper. This experiment shows that even in the balanced barycenter case, the discrete approach provides a poor approximation of the ground-truth continuous barycenter and the quality of this approximation decreases drastically with the increase of dimension.\\n\\n**Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. 
We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score.\\n\\n**References.**\\n\\nJ. Choi, J. Choi, and M. Kang. Generative modeling through the semi-dual formulation of unbalanced optimal transport. In Advances in Neural Information Processing Systems, volume 36, 2023.\\n\\nJ. Choi, J. Choi, and M. Kang. Analyzing and Improving Optimal-Transport-based Adversarial Networks. International Conference on Learning Representations, 2024.\\n\\nMarco Cuturi and Arnaud Doucet. Fast computation of wasserstein barycenters. In International conference on machine learning. PMLR, 2014\\n\\nNietert S, Goldfeld Z, Cummings R. Outlier-robust optimal transport: Duality, structure, and statistical analysis[C]. International Conference on Artificial Intelligence and Statistics. PMLR, 2022.\\n\\nWang X, Huang J, Yang Q, et al. On Robust Wasserstein Barycenter: The Model and Algorithm[C]. Proceedings of the 2024 SIAM International Conference on Data Mining (SDM), 2024\\n\\nPedro C \\u00c1lvarez-Esteban, E Del Barrio, JA Cuesta-Albertos, and C Matr\\u00e1n. A fixed-point approach to barycenters in wasserstein space. Journal of Mathematical Analysis and Applications, 2016.\\n\\nKolesov et. al., Estimating Barycenters of Distributions with Neural Optimal Transport, Proceedings of the 41st International Conference on Machine Learning, 2024a\\n\\nKolesov et. al., Energy-Guided Continuous Entropic Barycenter Estimation for General Costs, NeurIPS, 2024b\\n\\nTero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. CVPR, 2019\\n\\nKorotin, A., Li, L., Genevay, A., Solomon, J. M., Filippov, A., & Burnaev, E. (2021). Do neural optimal transport solvers work? a continuous wasserstein-2 benchmark. NeurIPS, 34.\\n\\n\\nNguyen, N. H., Le, D., Nguyen, H. P., Pham, T., & Ho, N. (2024). 
On Barycenter Computation: Semi-Unbalanced Optimal Transport-based Method on Gaussians. arXiv preprint arXiv:2410.08117.\"}", "{\"title\": \"Response to Reviewer niQu (continuation)\", \"comment\": \"**(3) The method also needs to solve a min-max problem, which is probably unstable and very costly.**\\n\\nWe agree that the reliance on adversarial training (with all its pitfalls) is indeed a limitation of our method (we mention this in Section 6). However, we want to underline the methodological/computational complexity of the problem we solve in our paper (OT barycenter $-$ and even its more tricky *unbalanced* version). In fact, this complexity is reflected in the non-trivial (and far from being easily implemented and made to work) methodological choices of other existing barycenter solvers. In particular, (Kolesov et al., 2024, Estimating) and (Korotin et al., 2022) rely on adversarial training similarly to us; (Kolesov et al., 2024, Energy) use iterative Langevin sampling at the training and inference stages, which is costly. To the best of our knowledge, the earlier OT barycenter approaches (see our Related works - Section 3) either also rely on some rather complex techniques, or consider too narrow OT barycenter formulations (e.g., Wasserstein-2 barycenters). Note that the latter makes applications similar to the Color-Shape experiment from our paper (Section 5.3) impossible.\\n\\n**(4) The sentence line 227 is not clear to me. <...>**\\n\\nThanks for pointing out this sentence; we updated it in the revised version of our paper.\\n\\n**(5) A term seems to be missing in the definition of $\\psi$-divergence, see Definition 1 in (Séjourné et al., 2023b).**\\n\\nThe \\"term\\" which you are referring to disappears in the case when $\\psi$-divergence is defined between the measures $\\mu\\_1$, $\\mu\\_2$ s.t. $\\mu\\_1\\ll\\mu\\_2$. 
In our paper, we deal with measures which satisfy this property $-$ thus, we decided not to overload the text by introducing this \\"term\\". However, to make this aspect more rigorous, we fix the definition of $\\psi$-divergence in the revised version of our paper by specifying that ${\\mathcal{D}\\_{\\psi}(\\mu\\_1\\|\\mu\\_2)= \\int_{\\mathcal{X}} \\psi\\bigg(\\frac{\\mu\\_{1}(x)}{\\mu\\_{2}(x)}\\bigg)d\\mu\\_{2}(x)}$ only if $\\mu\\_{1}\\ll \\mu\\_2$ and $+\\infty$ otherwise.\\n\\n**(6) In Corollary 1, it is stated that the sup is taken over $f_k$. Isn't it also taken over $m$?**\\n\\nYes, it is written in Eq. (9) of Corollary 1 but was accidentally not included in the corresponding caption. Thanks for noting this; we added it in the revised version of our paper.\\n\\n**(7) In Theorem 2, equation (12), shouldn't it be an argmin?**\\n\\nSince the infimum in equation (12) is attained at least for one map $\\gamma\\_k^*(\\cdot|x\\_k)$, the $\\arg\\inf\\_{\\gamma\\_k(\\cdot|x\\_k)}$ can certainly be replaced by $\\arg\\min_{\\gamma\\_k(\\cdot|x\\_k)}$. We updated Equation 12 accordingly.\\n\\n**(8) In Section 5.1, it is stated that the UOT problem is equivalent with the OT problem between rescaled distribution. Is it truly equivalent? And could we solve the OT barycenter between the rescaled distributions instead of solving the SUOT barycenter?**\\n\\nThe equivalence between the solutions of the UOT problem and the OT problem between the rescaled marginals is shown in (Choi et al., 2024), see their Theorem 3.3. We also include this theorem in our Appendix A.2 since we used it in the proof of our Theorem 2. Specifically, we exploited in that proof the *equivalence* between the unbalanced OT barycenter problem and its balanced counterpart. 
To ease the explanations, we recall this connection below.\\n\\nIn principle, we can reformulate the semi-unbalanced OT barycenter problem as a balanced OT barycenter one:\\n$$\\n\\\\inf\\\\_{\\\\mathbb{Q}\\\\in\\\\mathcal{P}(\\\\mathcal{Y})} \\\\mathcal{B}\\\\_{u}(\\\\mathbb{Q})=\\\\inf_{\\\\mathbb{Q}\\\\in\\\\mathcal{P}(\\\\mathcal{Y})}\\\\sum\\\\_{k=1}^K \\\\lambda\\\\_k \\\\text{SUOT}\\\\_{c_k,\\\\psi_k}(\\\\mathbb{P}\\\\_k, \\\\mathbb{Q})=\\\\inf\\\\_{\\\\mathbb{Q}\\\\in\\\\mathcal{P}(\\\\mathcal{Y})}\\\\sum_{k=1}^K \\\\lambda_k \\\\text{OT}(\\\\widetilde{\\\\mathbb{P}}\\\\_k, \\\\mathbb{Q})=\\n$$\\n$$\\n\\\\inf\\\\_{\\\\mathbb{Q}\\\\in\\\\mathcal{P}(\\\\mathcal{Y})}\\\\sum\\\\_{k=1}^K \\\\lambda_k\\\\Bigg\\\\lbrace\\\\sup\\\\_{f_k} \\\\int\\\\_{\\\\mathcal{X}\\\\_k} f_k^c(x\\\\_k) d \\\\widetilde{\\\\mathbb{P}}\\\\_k(x\\\\_k) + \\\\int\\\\_{\\\\mathcal{Y}} f\\\\_k(y) d \\\\mathbb{Q}(y)\\\\Bigg\\\\rbrace.\\n$$\\nHere the distributions $\\\\widetilde{\\\\mathbb{P}}\\\\_k$ ($k\\\\in\\\\overline{K}$) are specified via the optimal potential $f^*\\\\_k$ delivering maximum to the inner $\\\\sup$ problem: $d\\\\widetilde{\\\\mathbb{P}}_k(x_k)=\\\\nabla \\\\overline{\\\\psi}(-(f^*_k)^c(x_k))d\\\\mathbb{P}_k(x_k)$. This is the main **cornerstone of the balanced reformulation** $-$ prior to solving the balanced OT barycenter problem, we need to identify the re-scaled input measures, i.e., compute the re-scaling factors which can be identified only when the solutions of this problem (optimal potentials) are already given. *Thus, solving this balanced OT barycenter problem between the re-scaled measures seems to be an ambiguous task.*\"}", "{\"title\": \"Response to Reviewer hyxj (references)\", \"comment\": \"**References.**\\n\\nJiaojiao Fan, Amirhossein Taghvaei, and Yongxin Chen. Scalable computations of wasserstein\\nbarycenter via input convex neural networks. 
In Marina Meila and Tong Zhang (eds.), Proceedings\\nof the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine\\nLearning Research\\n\\nKorotin, A., Selikhanovych, D., \\\\& Burnaev, E. Kernel Neural Optimal Transport. In The Eleventh International Conference on Learning Representations, 2023\\n\\nKolesov et. al., Energy-Guided Continuous Entropic Barycenter Estimation for General Costs, NeurIPS, 2024.\\n\\nMakkuva, A., Taghvaei, A., Oh, S., \\\\& Lee, J. (2020, November). Optimal transport mapping via input convex neural networks. In International Conference on Machine Learning (pp. 6672-6681). PMLR.\\n\\nKorotin, A., Li, L., Genevay, A., Solomon, J. M., Filippov, A., \\\\& Burnaev, E. (2021). Do neural optimal transport solvers work? a continuous wasserstein-2 benchmark. Advances in neural information processing systems, 34, 14593-14605.\\n\\nGushchin, N., Kolesov, A., Korotin, A., Vetrov, D. P., \\\\& Burnaev, E. (2024). Entropic neural optimal transport via diffusion processes. Advances in Neural Information Processing Systems, 36.\\n\\nChoi, J., Choi, J., and Kang, M.. Generative modeling through the semi-dual formulation of unbalanced optimal transport. In Advances in Neural Information Processing Systems, volume 36, 2023.\\n\\nKolesov et. al., Estimating Barycenters of Distributions with Neural Optimal Transport, Proceedings of the 41st International Conference on Machine Learning, PMLR 235:25016-25041, 2024\"}", "{\"title\": \"Response to Reviewer niQu\", \"comment\": \"Thank you for your detailed feedback. Please find the answers to your questions below.\\n\\n**(1) The main weakness in my opinion is that the method feels really incremental compared to (Kolesov et al., 2024, Estimating). 
The main difference is that it is adapted to the UOT problem, which just changes slightly the formulation of the dual, and how to sample from the barycenter for inference.**\\n\\nWe would like to clarify the novelty of our paper.\\nIt would be most correct to position our method as a generalization of the recent SOTA approach to estimating continuous **balanced** OT barycenters - **NOTB** (Kolesov et al., 2024, Estimating), to the case of **semi-unbalanced OT** barycenters. However, this generalization is not straightforward as it is based on different principles.\\n\\n*First*, the congruence condition on the potentials, which is used in the NOTB approach, ceases to be true in the case of semi-unbalanced OT. In our paper, we developed a completely different condition on the potentials, which required the use of an additional parameter $m$ in the optimization algorithm.\\n\\n*Second*, the procedure of sampling from the barycenter during inference has major differences from the one used in NOTB. In the semi-unbalanced case, to get samples from the barycenter, we first need to define the procedure of sampling from the left marginals of the plans. Defining such a sampling procedure is tricky even for the continuous unbalanced OT (UOT) solver on its own. As far as we know, only one of the existing continuous UOT solvers (Gazdieva et al., 2024) proposes such a strategy using the **restricted** Gaussian mixture parametrization of potentials. Other scalable and theoretically justified continuous UOT solvers (Choi et al., 2024, Yang et al., 2018) do not offer such a precise sampling procedure and consider only sampling from the input measure. In contrast to previous approaches, we suggest a precise sampling procedure that utilizes equation (11) from our paper. 
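For concreteness, here is an illustrative instance of this rescaling (a sketch only; we assume $\overline{\psi}$ denotes the convex conjugate of $\psi$ and take the common scaled-KL choice $\psi(s)=\tau(s\log s - s + 1)$):

$$
\overline{\psi}(t)=\tau\big(e^{t/\tau}-1\big), \qquad \nabla\overline{\psi}(t)=e^{t/\tau}, \qquad \text{hence} \qquad d\widetilde{\mathbb{P}}(x)=\nabla\overline{\psi}\big(-f^{c}(x)\big)\,d\mathbb{P}(x)=e^{-f^{c}(x)/\tau}\,d\mathbb{P}(x).
$$

Informally, points with a large $c$-transform value receive exponentially small mass, which is how outliers get down-weighted, while letting $\tau\to\infty$ drives the weight to $1$, i.e., recovers the balanced regime.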
\\n**Using the learned potentials, our solver allows for precise sampling of barycenter from the left marginals of learned plans without any restrictions on the underlying potentials.** \\nIn the case of previous works in the field such as NOTB, the situation is much easier since these marginals coincide with input measures and sampling is defined a priori. \\n\\n*Third*, during the rebuttal period, we conducted **a novel high-dimensional experiment** to further demonstrate the practical advantages of our method. Specifically, as detailed in Appendix C.3, we performed interpolation between two image distributions within the FFHQ 256\\u00d7256 dataset, focusing on transitioning from the *young* to *elderly* categories. This experiment highlights the potential of our approach to manipulate images by learning barycenters between distinct image distributions, enabling controlled transitions across semantic attributes. Moreover, our model also demonstrates robustness to class imbalancedness in this practical task compared to other baselines. \\n\\n*More detailed explanation regarding this experiment is given in Appendix C.3 in the revised version of our paper.*\\n\\n**(2) Another weakness, which is classical with UOT, is that the choice of the unbalancedness parameters does not seem easy.**\\n\\nIndeed, the choice of unbalancedness parameter is a tricky point of *any method* related to UOT problem. In our paper, we followed recent works in Unbalanced Optimal Transport (UOT), such as (J. Choi et al., 2023, J. Choi et al., 2024), and adopted a manual selection of the unbalancedness parameter $\\\\tau$. Still, we believe that this flexibility in choosing $\\\\tau$ can be interpreted as an advantage because $\\\\tau$ can be tailored to suit the specific properties of the distributions, e.g., control the amount of samples treated as outliers. 
However, we also believe that developing a method to automatically determine $\\\\tau$ based on the proportion of outliers or class imbalance would be a highly interesting direction for future research.\"}", "{\"title\": \"Response to Reviewer MXNH\", \"comment\": \"Thank you for your detailed feedback. Please find the answers to your questions below.\\n\\n**(1) Given that the Wasserstein barycenter of Gaussian distributions has a closed-form solution, could the authors provide experimental verification for the Gaussian distribution case?**\\n\\nThank you for this valuable suggestion. In the revised version of our paper, we test our solver in the balanced/semi-unbalanced OT barycenter problem for Gaussian distributions with computable *ground-truth solutions*.\\n\\nIn the balanced OT barycenter problem for Gaussian distributions, the ground-truth barycenter is known to be Gaussian and can be estimated using the fixed point iteration procedure (\\u00c1lvarez-Esteban et al., 2016). We tested our solver in this balanced setup keeping in mind that for large unbalancedness parameters $\\\\tau$, it should provide a good approximation of balanced OT barycenter problem solutions. **Ultimately, we show that while our solver is designed to tackle an *unbalanced* OT barycenter problem, its performance in the different balanced OT barycenter problem is comparable to the current SOTA solvers.**\\n\\nA recent preprint (Nguyen et al., 2024) provides an iteration procedure for calculating unbalanced SUOT barycenter for Gaussian distributions using the quadratic cost and KL divergences. 
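As a reference point for these Gaussian comparisons, the balanced fixed-point procedure of (Álvarez-Esteban et al., 2016) mentioned earlier admits a compact implementation. The sketch below is illustrative only (it is not part of our solver, and the function names are ours); it iterates the map $S \mapsto S^{-1/2}\big(\sum_k \lambda_k (S^{1/2}\Sigma_k S^{1/2})^{1/2}\big)^{2} S^{-1/2}$ on the candidate barycenter covariance $S$:

```python
import numpy as np

def sqrtm_psd(a):
    # Symmetric PSD matrix square root via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def w2_barycenter_cov(covs, lambdas, n_iter=100):
    # Fixed-point iteration for the covariance S of the balanced
    # Wasserstein-2 barycenter of zero-mean Gaussians N(0, Sigma_k)
    # with weights lambda_k (Alvarez-Esteban et al., 2016):
    #   S <- S^{-1/2} ( sum_k lambda_k (S^{1/2} Sigma_k S^{1/2})^{1/2} )^2 S^{-1/2}
    d = covs[0].shape[0]
    s = np.eye(d)
    for _ in range(n_iter):
        s_half = sqrtm_psd(s)
        s_half_inv = np.linalg.inv(s_half)
        t = sum(l * sqrtm_psd(s_half @ c @ s_half) for l, c in zip(lambdas, covs))
        s = s_half_inv @ t @ t @ s_half_inv
    return s
```

For commuting covariances the fixed point is $(\sum_k \lambda_k \Sigma_k^{1/2})^2$, which gives a quick sanity check when producing such ground-truth comparisons.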
**We test our solver in this *unbalanced* setup for different parameters $\\tau$ and show that it consistently outperforms the SOTA balanced solver (Kolesov et al., 2024a).**\\n\\n*We include these new requested experiments in Appendices C.1 and C.2 of the revised version of our paper.*\\n\\n**(2) Can the robustness parameter $\\tau$ be automatically optimized during training rather than manually tuned? If not, for different proportions or types of noise, different $\\tau$ is usually required. Will this limit the practicality of the scheme?**\\n\\nWe thank the reviewer for highlighting an important aspect of our approach. Following recent works in Unbalanced Optimal Transport (UOT), such as (J. Choi et al., 2023, J. Choi et al., 2024), our model adopts a manual selection of the robustness parameter $\\tau$. Meanwhile, we believe that this flexibility in choosing $\\tau$ can be interpreted as an advantage because $\\tau$ can be tailored to suit the specific properties of the distributions, e.g., to control the amount of samples treated as outliers. However, we also believe that developing a method to automatically determine $\\tau$ based on the proportion of outliers or class imbalance would be a highly interesting direction for future research.\\n\\n**(3) How does the method work with high-dimensional data? In other words, the support size of the measures may be large in practical applications. While the support size of the measures in your experiments seems rather small.**\\n\\nWe are not quite sure what exactly you mean by \\"*support*\\" in this question because this notion is usually treated differently by researchers from the discrete and continuous OT fields. In the case of discrete OT, \\"*support*\\" usually corresponds to the size of the dataset. In the case of continuous OT, it is treated as an ambient or intrinsic data dimension. 
Thus, we provide the answers for both sides of this question.\\n\\n*Dataset size.* The sizes of the datasets do not matter for our algorithm. It uses stochastic optimization and can handle arbitrarily large datasets.\\n\\n*Data space dimension.* Searching for the barycenters directly in the data space may not be very meaningful because, as a result of averaging, some practically meaningless objects may appear. For example, averaging images of '0' and '1' with $\\ell_2$ cost directly in the image space will result in straightforward $\\ell_2$ interpolation of these images. Thus, when dealing with a barycenter problem, it is necessary to restrict the space where we search for this barycenter. For example, this space can be specified using pretrained generative models, e.g., StyleGAN (Karras et al., 2019), as it was done in previous related papers (Kolesov et al., 2024a,b) and our experiment with the MNIST dataset, see our Section 5.3. Here the data space dimension usually means an intrinsic dimension, i.e., the dimension of the manifold where this data lies. In the context of generative models, this intrinsic dimension is usually treated as the dimension of the model's latent space. Our experiments in Section 5.3 are conducted in the standard StyleGAN latent space of dimension 512.\\n\\n(See the next comment for continuation of this answer.)\"}", "{\"title\": \"General response\", \"comment\": \"Dear reviewers,\\n\\nthank you for your thoughtful reviews! We appreciate that you positively highlight our theoretical insights (Reviewers nQfu, MXNH), proper practical validation (Reviewers nQfu, niQu) and overall quality of the text (Reviewer hyxj).\\n\\nWe have uploaded an updated version of the paper. The newly added content is highlighted with the **blue** color. 
**The changes include**:\\n\\n- [nQfu] New Table with training/inference times of our solver and baselines in **Appendix B.1**;\\n- [MXNH] New **Appendices C.1/C.2** showing the performance of our solver and baselines in the OT/SUOT barycenter problem for Gaussian distributions with known ground-truth solutions. The experiments reveal the poor performance of the classic discrete method (Cuturi & Doucet, 2014) in the continuous OT barycenter problem;\\n- [MXNH] New **Appendix C.3** with a high-dimensional experiment demonstrating the practical advantages of our solver;\\n- [MXNH, niQu] Added references to papers (Nietert et al., 2022), (Wang et al., 2024), (Séjourné et al., 2023), see **Section 3** and **Section 6**;\\n- [niQu] Fixed the unclear aspects/typos in the **main text**.\\n\\n**References.**\\n\\nMarco Cuturi and Arnaud Doucet. Fast computation of Wasserstein barycenters. In International Conference on Machine Learning, pp. 685–693. PMLR, 2014.\\n\\nNietert S, Goldfeld Z, Cummings R. Outlier-robust optimal transport: Duality, structure, and statistical analysis[C]. International Conference on Artificial Intelligence and Statistics. PMLR, 2022: 11691-11719.\\n\\nWang X, Huang J, Yang Q, et al. On Robust Wasserstein Barycenter: The Model and Algorithm[C]. Proceedings of the 2024 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics, 2024: 235-243.\\n\\nSéjourné, T., Bonet, C., Fatras, K., Nadjahi, K., \\\\& Courty, N. (2023). Unbalanced optimal transport meets sliced-Wasserstein. 
arXiv preprint arXiv:2306.07176.\"}", "{\"title\": \"New theoretical result\", \"comment\": \"Dear Reviewers,\\n\\nas per the request of Reviewer hyxj, we have prepared an additional revision of our paper which includes a **new theoretical result** (Theorem 3 in Section 4) establishing the **quality bounds** on the recovered plans based on the duality gaps, i.e., the errors for solving inner and outer optimization problems in our objective (9). Theorem 3 shows that when our Algorithm 1 optimizing this $\\max$-$\\min$ objective has converged nearly to the optimum, its solutions are close to the true conditional plans.\\n\\nBest regards,\\nthe Authors\"}", "{\"title\": \"Response to Reviewer niQu (continuation #2)\", \"comment\": \"**(9) Is it proved that $T\\_1\\sharp\\mathbb{P}\\_1$ gives the UOT barycenter?**\\n\\nAs shown in Theorem 2, the optimal plan $T^\\star\\_1$ lies in the saddle point solution of our max-min optimization problem (equation 12). Moreover, as shown in equation 11, $\\\\mathbb{Q} = T^{\\\\star}_{1}\\\\# \\\\widetilde{\\\\mathbb{P}}_1$, where $\\widetilde{\\mathbb{P}}\\_1 = \\nabla \\bar{\\psi} (-(f^\\star\\_1)^c (x) ) \\mathbb{P}\\_1$.\\nThus, to obtain a point of the barycenter, we first sample $x\\sim \\widetilde{\\mathbb{P}}$ by rejection sampling (lines 327-338). Then, we pass $x$ through our learned transport map $T_1$. *Could you please reply to let us know whether this answers your question?*\\n\\n**(10) I think that the reference (Séjourné et al., 2023a) is missing. In their experiments, they compute robust OT barycenters via unbalancedness.**\\n\\nThank you for suggesting the relevant paper. We added the reference to the Related Work section of the revised paper. Based on our understanding of the paper, we categorize it as a discrete method. \\n\\n**(11) Typos.**\\n\\nThank you for your carefulness; we have fixed all the typos you mentioned.\\n\\n**Concluding remarks**. 
Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score.\\n\\n**References.**\\n\\nJ. Choi, J. Choi, and M. Kang. Generative modeling through the semi-dual formulation of unbalanced optimal transport. In Advances in Neural Information Processing Systems, volume 36, 2024.\\n\\nM. Gazdieva, A. Asadulaev, E. Burnaev, and A. Korotin. Light Unbalanced Optimal Transport. In Advances in Neural Information Processing Systems, volume 36, 2024.\\n\\nKolesov et. al., Estimating Barycenters of Distributions with Neural Optimal Transport, Proceedings of the 41st International Conference on Machine Learning, PMLR 235:25016-25041, 2024. \\n\\nS\\u00e9journ\\u00e9, T., Bonet, C., Fatras, K., Nadjahi, K., \\\\& Courty, N. (2023a). Unbalanced optimal transport meets sliced-Wasserstein. arXiv preprint arXiv:2306.07176.\\n\\nS\\u00e9journ\\u00e9, T., Peyr\\u00e9, G., \\\\& Vialard, F. X. (2023b). Unbalanced optimal transport, from theory to numerics. Handbook of Numerical Analysis, 24, 407-471.\\n\\nK. D. Yang and C. Uhler. Scalable unbalanced optimal transport using generative adversarial networks. In International Conference on Learning Representations, 2018.\\n\\nKorotin et al. Wasserstein iterative networks for barycenter estimation. Advances in Neural Information Processing Systems 35, 15672-15686, 2022.\\n\\nKolesov et. al., Energy-Guided Continuous Entropic Barycenter Estimation for General Costs, NeurIPS, 2024.\"}", "{\"comment\": \"Thank you for the reminder. There might have been an issue with the network at that time, which caused the update to fail. The update has now been successfully completed. I apologize for any inconvenience caused.\"}", "{\"title\": \"Response to Reviewer nQfu\", \"comment\": \"Thank you for your detailed feedback. 
Please find the answers to your questions below.\\n\\n**(1) My main concern is the novelty of the paper. <...> if the authors can demonstrate unique contributions that go beyond a straightforward combination of these methods.**\\n\\nWe would like to clarify the novelty of our paper. It would be most correct to position our method as a generalization of the recent SOTA approach to estimating continuous **balanced** OT barycenters - **NOTB** (Kolesov et al., 2024, Estimating), to the case of **semi-unbalanced OT** barycenters. However, this generalization is not straightforward as it is based on different principles.\\n\\n*First*, the congruence condition on the potentials, which is used in the NOTB approach, ceases to be true in the case of semi-unbalanced OT. In our paper, we developed a completely different condition on the potentials, which required the use of an additional parameter $m$ in the optimization algorithm.\\n\\n*Second*, the procedure of sampling from the barycenter during inference has major differences from the one used in NOTB. In the semi-unbalanced case, to get samples from the barycenter, we first need to define the procedure of sampling from the left marginals of the plans. Defining such a sampling procedure is tricky even for the continuous unbalanced OT (UOT) solver on its own. As far as we know, only one of the existing continuous UOT solvers (Gazdieva et al., 2024) proposes such a strategy using the **restricted** Gaussian mixture parametrization of potentials. Other scalable and theoretically justified continuous UOT solvers (Choi et al., 2024, Yang et al., 2018) do not offer such a precise sampling procedure and consider only sampling from the input measure. In contrast to previous approaches, we suggest a precise sampling procedure that utilizes equation (11) from our paper. 
\\n**Using the learned potentials, our solver allows for precise sampling of barycenter from the left marginals of learned plans without any restrictions on the underlying potentials.** \\nIn the case of previous works in the field such as NOTB, the situation is much easier since these marginals coincide with input measures and sampling is defined a priori. \\n\\n*Third*, during the rebuttal period, we conducted **a novel high-dimensional experiment** to further demonstrate the practical advantages of our method. Specifically, as detailed in Appendix C.3, we performed interpolation between two image distributions within the FFHQ 256\\u00d7256 dataset, focusing on transitioning from the *young* to *elderly* categories. This experiment highlights the potential of our approach to manipulate images by learning barycenters between distinct image distributions, enabling controlled transitions across semantic attributes. Moreover, our model also demonstrates robustness to class imbalancedness in this practical task compared to other baselines. \\n\\n*More detailed explanation regarding this experiment is given in Appendix C.3 in the revised version of our paper.*\\n\\n**(2) How does the time efficiency of U-NOTB compare with other baselines?** \\n\\nIn the revised manuscript, we reported the training/inference time of our method, see **new Tables 3,4** in Appendix B.1. As shown in Table 3, the time efficiency of U-NOTB is comparable to SOTA balanced solver NOTB in terms of training time. In terms of inference time, U-NOTB is slower, taking 2 to 10 times longer than NOTB. This gap is due to the additional computational complexity introduced by rejection sampling, which requires calculating the $c$-transform of the potential function $\\\\hat{f}_k$. This process involves a forward pass through the learned potential function. However, we would like to highlight that thanks to our proposed sampling method, our solver gains *robustness to outliers* and *class imbalance*. 
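To make the rejection-sampling step above concrete, here is a minimal illustrative sketch (it is not our actual implementation; the callable `weight` is a stand-in for the learned rescaling factor $\nabla\overline{\psi}(-\hat{f}_k^{c}(x))$, whose evaluation is exactly the extra forward pass discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rescaled(sample_p, weight, n, batch=4096):
    # Draw n samples from the rescaled measure dP~(x) proportional to
    # weight(x) dP(x), where sample_p(b) yields b i.i.d. samples from P
    # and weight(x) is an unnormalized non-negative rescaling factor.
    out = []
    while len(out) < n:
        x = sample_p(batch)
        w = weight(x)
        # Accept x with probability w(x) / max(w): accepted points are
        # distributed proportionally to w(x) dP(x).
        accept = rng.uniform(0.0, w.max(), size=batch) < w
        out.extend(x[accept].tolist())
    return np.asarray(out[:n])

# Toy sanity check: rescaling N(0, 1) by w(x) = exp(x) yields N(1, 1),
# so the accepted samples should have mean close to 1.
xs = sample_rescaled(lambda b: rng.normal(size=b), np.exp, 20_000)
```

In our actual setting, the analogue of `weight` involves the $c$-transform of the potential, so each candidate batch costs a forward pass through the learned network; this is the source of the inference-time gap reported above.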
A detailed discussion of rejection sampling can be found in lines 333–338 of our manuscript.\\n\\n**(3) The experiments seem to lack details about the size of the training/testing sets and some training specifics, such as learning rate and other hyperparameters.** \\n\\nThe implementation details for our solver are given in Appendix B. Specifically, it includes Table 2 with all the hyperparameters we used in the experiments. We refer to this Appendix section in the main text of our paper, see lines 346-347. However, we would be happy to include the additional details in a revised version of our paper if you can point out which information is missing. *Could you please clarify which extra details should be included in our paper?*\"}", "{\"comment\": \"Thank you for your answer and for revising the paper.\\n\\nI have a last question about the method, and also the equivalence between OT and UOT barycenters. Do I understand correctly that the method is only developed to handle probability distributions, and would not work with arbitrary positive measures? The same question holds for the equivalence between OT and UOT barycenters?\\n\\nOtherwise, my concerns and the main weaknesses have been addressed. In particular, the originality of the paper has been clarified. I will update my score to 8.\"}", "{\"comment\": \"Thank you for your detailed answer. I will raise your score.\"}", "{\"summary\": \"This paper proposes a neural network-based method to estimate the continuous barycenter via the dual formulation of the semi-unbalanced OT.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The proposed method may be the first continuous robust barycenter estimation approach with proper theoretical support and practical validation.\\n\\n2. The proposed method is robust and can estimate the barycenter based on data containing outliers.\", \"weaknesses\": \"My main concern is the novelty of the paper. 
The primary method seems to be a combination of barycenter calculation, neural optimal transport, and semi-unbalanced optimal transport. In my view, simply combining methods may not be sufficient for publication at ICLR, so I give a negative score. I will consider raising my score if the authors can demonstrate unique contributions that go beyond a straightforward combination of these methods.\", \"questions\": \"Q1. How does the time efficiency of U-NOTB compare with other baselines?\\n\\nQ2. The experiments seem to lack details about the size of the training/testing sets and some training specifics, such as learning rate and other hyperparameters.\\n\\nQ3. Since the overall training is similar to GANs with a max-min approach, is the training of U-NOTB stable?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer,\\n\\nThe discussion period between authors and reviewers is coming to an end. We have done our best to address your concerns about the theoretical properties of the solutions of our algorithm. In our new Theorem 3, we establish the conditions under which the plans estimated by our approach are close to the true UEOT ones, which justifies the practical usability of our approach. This proof is far from trivial, and we will be happy to address any of your questions. \\n\\nWe kindly ask you to give us feedback regarding this new result.\"}", "{\"metareview\": \"In the paper, the authors proposed a new approach to estimating the (semi-)unbalanced barycenter of continuous distributions. This is done by formulating the problem as a min-max optimization problem, and the method can be used with any general cost function. All the reviewers agree that the proposed method is novel and the theoretical results are thorough. 
During the rebuttal period, the authors also included additional theoretical results on the quality bounds on the recovered plans based on the duality gaps, which strengthen the theoretical guarantees for Algorithm 1 in the paper.\\n\\nWhile there are some concerns that the experiments are not extensive and that the method may be slightly incremental compared to previous works, I believe that the work has sufficient novelty and merit for ICLR. Therefore, I recommend accepting the paper. \\n\\nThe authors are encouraged to incorporate the suggestions of the reviewers into the revision of their manuscript.\", \"additional_comments_on_reviewer_discussion\": \"Please refer to the meta-review.\"}", "{\"title\": \"Additional comment for Reviewer hyxj\", \"comment\": \"Dear Reviewer,\\n\\nAs per your request, we have included in the revised version of our paper a new theoretical result (Theorem 3 in Section 4) showing the quality bounds for the recovered plans according to the duality gaps, i.e., the errors for solving the inner and outer optimization problems in our objective (9). Thanks to this result, we deduce that when our Algorithm 1 optimizing (9) has converged nearly to the optimum, its solutions are close to the true conditional plans.\\n\\nWe would appreciate it if you could take this requested result into account when finalizing your score.\\n\\nBest regards,\\nthe Authors\"}", "{\"title\": \"Response to Reviewer hyxj\", \"comment\": \"Thank you for your detailed feedback. Please find the answers to your questions below.\\n\\n**(1) The necessary condition seems to be not difficult to obtain, which makes it not convincing enough. 
So, would it be possible to obtain some sufficient conditions?**\\n\\nWe understood the term \\\"sufficient condition\\\" as follows:\\nfor every optimal saddle point $\\\\{(f\\\\_k^*, \\\\gamma\\\\_k^*)\\\\}\\\\_{k=1}^K$ of our optimization objective (9), it holds that $\\\\{\\\\gamma\\\\_k^*\\\\}\\\\_{k=1}^K$ is the family of true SUOT plans between $\\\\mathbb{P}\\\\_{[1:K]}$ and $\\\\mathbb{Q}$. \\n\\nActually, the latter seems to be true under some additional assumptions related to the *strong convexity* of $c_k(x_k,y)-f_{k}(y)$, ($k\\\\in[1,K]$). These assumptions were already established and studied in previous works on balanced OT, see (Fan et al., 2021) or (Makkuva et al., 2020), and on the balanced OT barycenter problem, see (Kolesov et al., 2024, Estimating). In principle, we think that one can make the same kind of assumptions and presumably derive analogous bounds for our semi-unbalanced barycenter setup. However, these assumptions are far from practice since they usually require the use of a restricted class of neural network architectures, i.e., Input Convex Neural Networks (ICNNs). At the same time, methods exploiting this type of network are known to provide only fair results in generative modeling tasks, see, e.g., Fig. 4 of (Korotin et al., 2021) for a visualization of their performance. \\n\\nAnother way of ensuring the \\\"sufficient condition\\\" consists in considering more general OT formulations, e.g., using different regularizations such as entropic (Gushchin et al., 2024) or kernel (Korotin et al., 2023). 
We believe that our ideas for basic OT presented in the paper can be generalized to such cases, but we leave the investigation of this aspect for future work.\\n\\n**(2) It would be better if the authors can highlight the advantages of finding the optimal plan over the optimal solution.**\", \"we_interpreted_this_question_as_asking\": \"Why do we focus on finding an optimal plan instead of directly computing the barycenter distribution? What are the practical advantages of this approach?\\n\\nThere exist many practical tasks which require finding a shared representation of data that comes from different sources. This task can be positioned as the problem of finding the barycenter of distributions. Then, in order to translate new data from each of the input distributions to the shared representation (barycenter), the practitioner needs to have access to the corresponding translation maps (conditional plans). This explains the importance of finding the optimal plans and not only the barycenters on their own. More details on the specific applications where such tasks appear, e.g., finding shared representations for scans from different MRI scanners or mixing geological simulators, can be found in (Appendix B.2, Kolesov et al., 2024).\\n\\n\\n**(3) As stated in the limitations, the authors did not provide a rigorous analysis for the convergence of the algorithm.**\\n\\nDeriving theoretical results regarding the convergence of the obtained algorithm for estimating continuous unbalanced OT barycenters is difficult and may not even be solvable under reasonable conditions. Indeed, even existing results of this kind for the UOT problem itself seem to require restrictive assumptions which are not feasible in practice. For example, (Choi et al., 2023) in their Theorem 3.4 assume strong convexity of the potentials, which is not applicable in practice, as we explained in our first answer to you.\\n\\n**Concluding remarks**. 
Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score.\"}", "{\"title\": \"Response to Reviewer nQfu (continuation)\", \"comment\": \"**(4) Since the overall training is similar to GANs with a max-min approach, is the training of U-NOTB stable?**\\n\\nThanks for raising this point. Indeed, the instability of training is a well-known issue of adversarial optimization objectives. Our approach is not an exception, and we noted this aspect in the Discussion section of our paper, see lines 534-537. Still, we emphasize that the majority of the existing approaches for barycenter computation suffer from some kind of computational issue. Among the recent approaches: (Kolesov et al., 2024, Estimating) and (Korotin et al., 2022) also resort to min-max optimization; (Kolesov et al., 2024, Energy) utilizes energy-based training with Langevin simulation, which is time-costly and may be unstable under improper hyperparameter settings (e.g., too large a Langevin step size, too small a number of Langevin steps, etc.). Among the other approaches: while there may be some that do not present serious computational challenges, this comes at the cost of limited applicability (e.g., reliance exclusively on $\\\\ell_2$ OT) and scalability, see Table 1 in (Kolesov et al., 2024, Energy).\\n\\n**Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score.\\n\\n**References.**\\n\\nJ. Choi, J. Choi, and M. Kang. Generative modeling through the semi-dual formulation of unbalanced optimal transport. 
In Advances in Neural Information Processing Systems, volume 36, 2024.\\n\\nM. Gazdieva, A. Asadulaev, E. Burnaev, and A. Korotin. Light Unbalanced Optimal Transport. In Advances in Neural Information Processing Systems, volume 36, 2024.\\n\\nKolesov et al. Estimating Barycenters of Distributions with Neural Optimal Transport. Proceedings of the 41st International Conference on Machine Learning, PMLR 235:25016-25041, 2024. \\n\\nKorotin et al. Wasserstein iterative networks for barycenter estimation. Advances in Neural Information Processing Systems 35, 15672-15686, 2022.\\n\\nKolesov et al. Energy-Guided Continuous Entropic Barycenter Estimation for General Costs. NeurIPS, 2024.\\n\\nK. D. Yang and C. Uhler. Scalable unbalanced optimal transport using generative adversarial networks. In International Conference on Learning Representations, 2018.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Additional comment for Reviewer nQfu\", \"comment\": \"Dear Reviewer,\\n\\nThe deadline for the rebuttal phase is fast approaching. We would be grateful if you could provide us with your feedback on our responses to the reviews. We are happy to address any additional points during the remaining period.\\n\\nBest regards,\\nthe Authors\"}", "{\"summary\": \"In this paper, the authors investigate the problem of finding the barycenter with semi-unbalanced optimal transport (SUOT) cost from the perspective of dual theory. In particular:\\n\\n(1) They derive the theory for the dual form of the SUOT barycenter via reformulation, and a necessary condition based on the marginal plan and conditional plan.\\n\\n(2) Then, from the necessary condition, they propose an algorithm (Algorithm 1) to approximate the optimal transport plan from the solution. This algorithm is based on training a neural network for the optimal plan. 
\\n\\n(3) Then, they empirically validate the efficacy of the algorithm on both synthetic and real datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides an important and novel dual theory for the SUOT barycenter problem.\\n\\n2. The proposed algorithm works well and is robust to imbalance and outliers.\\n\\n3. The paper is well-structured and easy to follow.\", \"weaknesses\": \"1. The necessary condition seems to be not difficult to obtain, which makes it not convincing enough. So, would it be possible to obtain some sufficient conditions?\\n\\n2. It would be better if the authors can highlight the advantages of finding the optimal plan over the optimal solution. \\n\\n3. As stated in the limitations, the authors did not provide a rigorous analysis for the convergence of the algorithm.\", \"questions\": \"See Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Additional comment for Reviewer MXNH\", \"comment\": \"Dear Reviewer,\\n\\nWe appreciate your positive feedback on our answers. Please note that we have included in the revised version of our paper an additional theoretical result showing the quality bounds for the recovered plans (see our new [comment](https://openreview.net/forum?id=CI5Cj0vktS&noteId=QiKYPcKq1s) for all reviewers). \\n\\nIn your previous message you mentioned that you plan to update the score. However, we see that you have not updated it yet. Are there any other questions that you would like to ask us before finalizing your score?\\n\\nBest regards,\\nthe Authors\"}", "{\"title\": \"Response to Reviewer niQu\", \"comment\": \"Yes, the method is specifically developed for probability distributions. 
We believe that extending our method to arbitrary positive measures represents a promising direction for future work.\\n\\nMoreover, to establish the connection between OT and SUOT barycenters in the case of probability measures, we employ the connection between OT and UOT problems established in (Choi et al., 2024). Since the latter result holds true for the case of probability measures, the connection between OT and SUOT barycenters in the case of arbitrary positive measures also requires further investigation.\\n\\nWe are delighted to hear that your concerns have been addressed. Once again, we sincerely appreciate the time and effort you have put into reviewing our paper.\"}", "{\"comment\": \"Thank you for your response. I will increase my score.\"}" ] }
CI4sCBMXjP
ELICIT: LLM Augmentation Via External In-context Capability
[ "Futing Wang", "Jianhao Yan", "Yue Zhang", "Tao Lin" ]
Enhancing the adaptive capabilities of large language models is a critical pursuit in both research and application. Traditional fine-tuning methods require substantial data, computational resources, and specific capabilities, while in-context learning is limited by the need for appropriate demonstrations and efficient token usage. Inspired by the expression of in-context learned capabilities through task vectors and the concept of modular capability or knowledge, we propose ELICIT, a framework consisting of two modules designed to effectively store and reuse task vectors to enhance the diverse adaptive capabilities of models without additional training or inference tokens. Our comprehensive experiments and analysis demonstrate that our pipeline is highly transferable across different input formats, tasks, and model architectures. Externally storing and reusing vectors that represent in-context learned capabilities not only shows the potential to extract modular capabilities but also significantly enhances the performance, versatility, adaptability, and scalability of large language models, paving the way for more efficient and effective use of these models in a wide range of applications.
[ "modular" ]
Accept (Poster)
https://openreview.net/pdf?id=CI4sCBMXjP
https://openreview.net/forum?id=CI4sCBMXjP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zD4jdUSkvG", "wvUklJzh1K", "v5VwPbgkl8", "rQjn1yDWGp", "nbEMOkyCtE", "mdpElUecqo", "ksoKhbejaB", "ge71hRlISQ", "eG6ntbezqQ", "dwomf2a10v", "dCwNETctXu", "ZkDGez1zp8", "YYiQuMLUSE", "X76lmRfQ3P", "WI2n9xFTGM", "TYp3NOZ031", "SFDtDDSlSX", "QUPffa9jig", "JRwY8cC2vX", "I5a0efJGSG", "HKEDBVOvCh", "BPSXhFne3j", "BL44YutsoT", "9Jaqt2jx2N", "7c0ZFzRdOv", "2ywSgIjqGN", "1UwnmCfWSi", "0FWIm7s0XH" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732330752282, 1732330342900, 1737524031313, 1732331363471, 1733190441430, 1732600365204, 1732329276099, 1732328447546, 1732600974272, 1730554365529, 1732460753082, 1732600739067, 1734560263629, 1732330566908, 1732600801588, 1732329448896, 1732330203195, 1730322086213, 1732460847878, 1732847221759, 1732688269269, 1732570086395, 1730609941147, 1730699026752, 1732329026171, 1732681822100, 1732331678085, 1732329982272 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Reviewer_BquA" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Reviewer_yoPv" ], [ 
"ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Area_Chair_wSmS" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Reviewer_dv9u" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Reviewer_dv9u" ], [ "ICLR.cc/2025/Conference/Submission10183/Reviewer_nS2i" ], [ "ICLR.cc/2025/Conference/Submission10183/Reviewer_BquA" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Reviewer_nS2i" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ], [ "ICLR.cc/2025/Conference/Submission10183/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer dv9u (2/4)\", \"comment\": \"> **W4:**\\u00a0Figure 3 illustrates the trade-off between stronger interventions and language modeling performance on WikiText, which is an expected observation since ICL and general language modeling operate with different circuits [3, 4, 5]. Having stronger interventions steers the activations further from the pretrained task, thus resulting in the worse performance, which is what previous works also showed. Authors do not analyze or explain this observation, but just comment how the strength of interventions affects the ICL and language modeling performances. I believe further discussion and explanation should be included.\\n> \\n\\nThank you for your valuable insights. This provides an interesting and plausible deeper understanding of the trade-off between stronger interventions and language modeling performance shown in Figure 3. 
We have added this possible interpretation in Appendix A.1. \\n\\nThe results emerge from scenarios that differ from traditional ICL understanding. Our focus is on determining optimal intervention strengths for task vectors to elicit a model's inherent capabilities\\u2014an aspect that previous work has not comprehensively explored. This unexplored dimension significantly impacts our pipeline's effectiveness.\\n\\n> **W5:**\\u00a0**The proposed method relies heavily on validation data to select optimal hyperparameters and determine the filtering threshold**. And the similarity-based model for task vector retrieval is further trained. Does relying on the validation tuning affects the scalability and efficiency? Can you please explain how this similarity model was trained, and with what data?\\n> \\n\\nThat\\u2019s a good question! We agree that our capability library construction takes time and relies on validation data; however:\\n\\n- This construction is a one-time process and **doesn\\u2019t impact the efficiency during test time.**\\n- Once constructed, task vectors can be **reused** during testing, improving overall efficiency.\\n- We believe ELICIT **has scalability potential** when provided with a sufficient quantity and diversity of task vectors in the capability library. 
Because\\n - ELICIT can generalize to unseen tasks as shown in Table 3 in the paper.\\n - In Q7, we demonstrate that three types of task vectors could boost performance on an unseen task input.\\n\\nInstead of training a similarity-based model for retrieval, we directly compute similarities between:\\n\\n- Query embedding $x \\\\in \\\\mathbb{R}^{L \\\\times d}$\\n- All task vectors $\\\\{\\\\theta_i\\\\}_{i=1}^{k \\\\times |\\\\mathcal{T}|}$ in the library, where $\\\\theta_i \\\\in \\\\mathbb{R}^{L \\\\times d}$\\n\\nThe similarity is computed using various metrics $f$, including:\\n\\n- Cosine similarity\\n- Euclidean distance\\n- t-SNE projection distance\\n\\nFor each query , we obtain a similarity list of length $k \\\\times |\\\\mathcal{T}|$, where each element is $f(x,\\\\theta_i)$. Prior to implementing these similarity metrics for retrieval, we evaluated their effectiveness through precision-recall AUC curves (shown in Figure 6). The analysis revealed that these similarity-based approaches demonstrate inadequate discrimination ability for identifying relevant task vectors from the library. This poor discriminative performance suggests that using these metrics for final retrieval and evaluation would not yield reliable or meaningful results.\\n\\n> **W6:**\\u00a0Tables 1 and 2 contain additional bolded entries, and captions are not descriptive enough, missing information about the sample size for the BM25 and evaluation in general. Further, figure 6 does not have a clear explanation of components labeled a, b, and c, while also missing a description in general. Finally, there is a typo in the appendix in the title for the Similarity based retrieval methods.\\n> \\n\\nThank you for your advice! We have made the modifications marked in red.\"}", "{\"title\": \"Response to Reviewer yoPv (2/2)\", \"comment\": \"**Questions:**\\n> Q1. 
Why isn't the experiment conducted on instruction-tuned models but base models?\\n> \\n- We conducted a preliminary experiment on Llama3-8B-Instruct. The results shown in Table X9 demonstrates that **ELICIT can potentially work with instruction-tuned model**s.\", \"table_x9\": \"The preliminary experiment of ELICIT on Llama3-8B-Instruct.\\n \\n | | nlu | reasoning | knowledge | safety | avg |\\n | --- | --- | --- | --- | --- | --- |\\n | zs | 45.0 | 4.9 | 31.9 | 42.5 | 31.1 |\\n | Ours | **52.7** | **36.2** | **70.9** | **49.0** | **52.2** |\\n- We initially excluded instruction-tuned models due to **their sensitivity to prompts**, which would significantly increase the **complexity** of our experimental setup.\\n - **Evidence of sensitivity of Instruction-based model**: As demonstrated by prior work ([3,4]) and our additional case experiment with Llama3-8B-Instruct (Table X10), instruction-tuned models exhibit substantial variations in zero-shot performance based on prompt formatting or rephrasing.\\n - **This challenge is not related to our focus and will make experiments complex**. While our pipeline could be extended to instruction-tuned models, it would require additional considerations, particularly in the initial building stage where we need to identify effective task vectors for in-context learning (ICL). The challenge lies in determining optimal methods for concatenating examples and prompts for instruction-tuned models. 
By focusing on base models, we can conduct a more **straightforward analysis of our pipeline**'s effectiveness.\", \"table_x10\": \"A case demonstrating Llama3-8B-Instruct's sensitivity to prompts.\\n\\n| input | output |\\n| --- | --- |\\n| prompt=\\\"<\\\\|begin_of_text\\\\|><\\\\|start_header_id\\\\|>system<\\\\|end_header_id\\\\|>\\\\n\\\\n**You are a pirate chatbot who always responds in pirate speak!**<\\\\|eot_id\\\\|><\\\\|start_header_id\\\\|>user<\\\\|end_header_id\\\\|>\\\\n\\\\n\\u2026 input\\u2026<\\\\|eot_id\\\\|><\\\\|start_header_id\\\\|>assistant<\\\\|end_header_id\\\\|>\\\\n\\\\n\\\" | Arrr, shiver me timbers! Yer |\\n| \\\"<\\\\|begin_of_text\\\\|><\\\\|start_header_id\\\\|>system<\\\\|end_header_id\\\\|>\\\\n\\\\n**You are a helpful assistant.**<\\\\|eot_id\\\\|><\\\\|start_header_id\\\\|>user<\\\\|end_header_id\\\\|>\\\\n\\\\n\\u2026input\\u2026<\\\\|eot_id\\\\|><\\\\|start_header_id\\\\|>assistant<\\\\|end_header_id\\\\|>\\\\n\\\\n\\\" | B |\\n\\n[3] [Evaluating the zero-shot robustness of instruction-tuned language models](https://arxiv.org/abs/2306.11270)\\n\\n[4] [Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention](https://aclanthology.org/2023.findings-emnlp.868/)\\n\\n> Q2. Did you compute the decrease in inference efficiency caused by the introduction of a new module?\\n> \\n\\nIn fact,\\n\\n- **quantitative results demonstrate the efficiency of our method even after introducing the retrieval module.** Using the Pythia-6.9B model, we measured the average processing time per sample across different pipeline stages (Table X11).\\n - Results show that our retrieval module introduces minimal computational overhead, **adding only 0.105 seconds** on average.\\n - The total ELICIT inference time, including retrieval, remains efficient at **0.172 seconds** per sample. 
Compared to baselines, ELICIT processes samples **2-3 times faster** than 16-shot or BM25 inference.\\n\\nWe have added this analysis in Appendix N.\", \"table_x11\": \"The running time of different stages per sample across different domains.\\n\\n| | **zs inference time** | **ELICIT inference time** | **retrieve time** | **bm25 inference time** | **16shot inference time** |\\n| --- | --- | --- | --- | --- | --- |\\n| **nlu** | 0.063 | 0.064 | 0.097 | 0.302 | 0.181 |\\n| **reasoning** | 0.065 | 0.066 | 0.104 | 0.349 | 0.315 |\\n| **knowledge** | 0.066 | 0.069 | 0.108 | 0.517 | 0.371 |\\n| **math** | 0.065 | 0.067 | 0.111 | 0.351 | 0.352 |\\n| **safety** | 0.067 | 0.069 | 0.104 | 0.611 | 0.366 |\\n| **avg** | 0.065 | 0.067 | 0.105 | 0.426 | 0.317 |\", \"our_work_advances_a_novel_vision_for_improving_llm\": \"flexibly eliciting a model's relevant capabilities in response to arbitrary queries without requiring prior task identification. While previous task vector research has focused on known, single-task applications, we build a capability library that enables dynamic reuse of task vectors to explore this vision.\\n\\nFollowing your feedback, we have conducted comprehensive additional experiments with larger models and instruction-tuned models, and included detailed runtime analyses. We updated the content accordingly. ***We hope these results resolve your concerns, and we would greatly appreciate it if you reconsidered the score of our paper.***\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer dv9u (3/4)\", \"comment\": \"> **Q1**: Have you tried aggregating task vectors by task, and if so, what were the results?\\n> \\n\\nThank you for your advice!\\n\\n- We conducted preliminary experiments where we represent each task by its averaged task vector, paired with a single random ICL prompt. 
Then our grouped capability library consists of $(\\\\hat{p}, \\\\bar{\\\\theta}, l^*)$), effectively reducing our task vectors from $|k \\\\times \\\\mathcal{T}|$ to $| \\\\mathcal{T} |$.\", \"**Comparison Methods**: We compare\", \"zero-shot baseline\", \"*ELICIT (group+top-1)*: Use grouped capability library and choose top-1 task vector\", \"*ELICIT (group+top-2)*: Use grouped capability library and choose top-2 task vectors\", \"*ELICIT*: The original ELICIT implementation (without grouping)\", \"**Results**: The results on Pythia-6.9b showed in Table X12. **We find that our original implementation performs better overall.** Moreover, we observed that using grouped capability library harms performance in Knowledge and NLU tasks.\", \"We did not average task vectors per task because **it would compromise our retrieval-based design.**\", \"Since our system handles queries from **unknown tasks**, we **require a retrieval module** to identify suitable task vectors from our capability library.\", \"Our retriever works by matching queries against individual ICL prompts to find the most similar examples. Averaging task vectors by task category would **break the one-to-one correspondence** between specific ICL prompts and their task vectors, undermining the retriever's ability to make precise matches.\"], \"table_x12\": \"The results of ELICIT using grouped capability library. The experiments are based on Pythia-6.9B.\\n\\n| | nlu | reasoning | knowledge | math | safety | avg |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| zs | 38.0 | 16.0 | 16.7 | 5.9 | 31.9 | 21.7 |\\n| ELICIT (group+top-1) | *37.1* | 26.8 | 28.6 | **15.9** | 49.0 | 31.5 |\\n| ELICIT (group+top-2) | *37.1* | 27.2 | *13.3* | 14.4 | 49.1 | 28.2 |\\n| ELICIT | **39.2** | **27.4** | **29.1** | 15.3 | **49.6** | **32.1** |\\n\\n> **Q2:** Have you considered using multiple layers to represent task vectors instead of relying on a single optimal layer? 
If yes, how did this affect performance?\\n> \\n- **Multiple layer intervention shows promise**. It\\u2019s an interesting idea! We conducted preliminary experiments exploring this idea, where the intervention strength $\\\\alpha=2$ was distributed equally across layers:\\n - **Comparison methods**: We conducted our experiments including the following settings:\\n - zero-shot baseline\\n - *ELICIT(1 layer)*: the original single-layer ELICIT implementation\\n - *ELICIT(3 layer)*: ELICIT intervention on 3 layers (centered on the optimal layer)\\n - *ELICIT(all layers)*: ELICIT intervention on all layers\\n - **Results**: As shown in Table X4, the results from Llama3-8B demonstrate an interesting trend: increasing the number of layers involved in the intervention tends to improve overall performance. A deeper and more comprehensive investigation into this phenomenon remains an interesting direction for future research.\\n\\nWe have added these results in Appendix M.\", \"table_x4\": \"Comparison of multiple intervention layers on ELICIT. The experiments are conducted on Llama3-8B.\\n| | nlu | reasoning | knowledge | math | safety | avg |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| zs | 32.4 | 31.8 | 42.8 | 15.4 | 36.6 | 31.8 |\\n| ELICIT(1 layer) | 38.3 | 46.9 | 60.7 | 20.6 | 51.1 | 43.5 |\\n| ELICIT(3 layers) | 38.2 | **47.1** | 61 | 21.6 | 51.6 | 43.9 |\\n| ELICIT(all_layer) | **40.9** | 46.3 | **61.4** | **21.7** | **52.4** | **44.5** |\\n\\n> **Q3:** Can you please clarify if only the most similar task vector is used for intervention in the end? If so, does this mean many task vectors remain unused?\\n> \\n\\nThank you for your question! We build a capability library containing various task vectors. Thus, \\n\\n- **For one query, only the similar task vectors are used during inference.** When processing an arbitrary query without explicit task information, we dynamically retrieve the most relevant task vector. 
The most similar task vectors are then used to elicit the model's inherent capability for that specific type of task.\\n- **Different queries will trigger different task vectors from the library based on their specific requirements.** The capability library serves as a diverse pool of capabilities that can be activated for different types of queries. As shown in Figure 16 in Appendix J, we observe that 20 distinct types of task vectors are utilized after in-domain and out-of-domain (OOD) evaluations.\\n\\nWe have added this result in Appendix J.\"}", "{\"title\": \"Response to Reviewer BquA (2/3)\", \"comment\": \"> Q2. Figure 5 shows that ELICIT boosts performance on relevant tasks while minimally compromising performance on non-related tasks. Is this because task vectors are not applied for unrelated tasks, or does the model perform well even when unrelated task vectors are given?\\n> \\n\\n**This phenomenon occurs because task vectors are not applied for unrelated queries. Forcibly applying task vectors to all queries can actually harm model performance.** We conducted additional experiments:\\n\\n- We calculated the number of chosen states per domain for each sample when the library only contains math-related task vectors on Mistral; the results are presented in Table X2.\\n - **Results**: The results show that math-related tasks consistently employ a high number of chosen states (9.8 \\u00b1 0.1), while other domains show minimal state selection (close to 0.0). 
This result demonstrates that the behavior observed in Figure 5 arises from ELICIT's ability to dynamically retrieve and reuse task vectors from the capability library, enabling **selective activation** of relevant capabilities.\\n - **Case Study on Unrelated Domains**: We observed minor improvements in reasoning tasks, exemplified by this ARC Challenge case. It demonstrates our pipeline's ability to selectively activate relevant capabilities based **solely on the query** and to handle unseen tasks flexibly, **without requiring explicit task information.**\\n \\n| | |\\n| --- | --- |\\n| input | Below are multiple-choice science questions. Answer with 'X', X being the correct option.\\\\n\\\\nQuestion: An unbalanced equation for the reaction of methane gas (CH_{4}) with oxygen is shown below. CH_{4} + \\\\\\\\Box O_{2} -> 2CO_{2} + 4H_{2}O How many molecules of oxygen gas (O_{2}) are needed to properly balance this equation?\\\\nOptions:\\\\nA. 1\\\\nB. 2\\\\nC. 3\\\\nD. 4\\\\nAnswer: |\\n| chosen task vectors | 10 task vectors from MathQA |\\n| Original Output | B |\\n| ELICIT Output | **D (correct)** |\\n| | |\\n- **Forcibly applying the top task vectors for each query can harm performance.** Our experiments on Mistral (Table X3) showed that this approach led to significant declines in NLU and knowledge performance.\\n\\nThese experimental results demonstrate that ELICIT's performance improvement stems from its selective activation mechanism and the importance of selectively using only task-relevant vectors to dynamically activate capabilities.\\n\\nWe have added these results in Appendix J and added a description in the main content (Line 456) to clarify this.\\n\\nTable X2: The average number of chosen states per domain per sample. 
The statistics come from Mistral when the capability library only contains math-related task vectors.\\n\\n| | **In-domain** | | | | |\\n| --- | --- | --- | --- | --- | --- |\\n| | **NLU** | **Reasoning** | **Knowledge** | **Math** | **Safety** |\\n| chosen nums | 0.0 \\u00b1 0.0 | 0.1 \\u00b1 0.0 | 0.0 \\u00b1 0.0 | **9.8 \\u00b1 0.1** | 0.0 \\u00b1 0.0 |\\n| | **Out-of-domain** | | | | |\\n| | **GLUE COLA** | **BBQ Religion** | **Deepmind** | **MMLU-Psychology** | **BBH-five-objects** |\\n| chosen nums | 0.0 \\u00b1 0.0 | 0.0 \\u00b1 0.0 | **9.9 \\u00b1 0.1** | 0.0 \\u00b1 0.0 | 0.0 \\u00b1 0.0 |\\n\\nTable X3: The results of forcibly applying the top task vectors for each query. The experiments were conducted on Mistral. Domains with degraded performance are marked in *italics*.\\n| | nlu | reasoning | knowledge | math | safety |\\n| --- | --- | --- | --- | --- | --- |\\n| zs | 28.8 | 27.4 | 58.8 | 4.0 | 42.2 |\\n| Ours | *15.7* | 31.4 | *47.8* | 18.3 | 53.1 |\"}
We are encouraged that the reviewers acknowledged our presentation is clear and easy to follow (Reviewers BquA, yoPv, dv9u), the motivation is straightforward (Reviewer BquA), the improvements on zero-shot performance are significant and consistent (Reviewers nS2i, yoPv, dv9u), the proposed method is efficient and flexible (Reviewers BquA, nS2i), and the experiments are extensive (Reviewer nS2i). We are particularly grateful that the reviewers recognized our core contributions: advancing novel approaches to improve LLM capabilities (Reviewer nS2i) and exploring a promising new direction in the field (Reviewer dv9u).\\n\\n### Summary of Contribution and Novelty\\n\\nWe believe that our work makes concrete contributions to the community.\\n\\nTo clarify, our work presents a novel, modular framework, ELICIT, for enhancing the adaptive capabilities of LLMs on demand with minimal computational overhead, which reflects our vision that ***LLMs should be capable of using their own best capabilities when solving problems during test time,*** aligning with the recent trend of test-time scaling for language models [1, 2].\\n\\nWith our capability library, our framework can reuse task vectors for arbitrary queries **without requiring prior task identification**, which is different from previous work. 
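As a purely illustrative sketch (our simplification, not the released implementation), the query-driven reuse described above amounts to retrieving similar task vectors and injecting them into the forward pass. The toy embeddings, the similarity threshold, and the intervention strength below are hypothetical stand-ins; the actual pipeline uses a SimCSE-RoBERTa retriever over stored ICL prompts and real transformer hidden states:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_emb, library, threshold=0.8, top_n=10):
    """Select up to top_n task vectors whose stored-prompt embedding is
    similar enough to the query; an unrelated query selects none, leaving
    the model running plain zero-shot (selective activation)."""
    scored = sorted(((cosine(query_emb, emb), vec) for emb, vec in library),
                    key=lambda pair: pair[0], reverse=True)
    return [vec for score, vec in scored[:top_n] if score >= threshold]

def intervene(hidden, task_vector, alpha=2.0):
    """Linear-combination intervention at the stored layer: h <- h + alpha * v
    (a direct replacement would return the task vector instead)."""
    return [h + alpha * v for h, v in zip(hidden, task_vector)]
```

Here `library` holds hypothetical `(prompt_embedding, task_vector)` pairs; when `retrieve` returns an empty list, no intervention is applied and the query is answered zero-shot.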
***This framework and vision represent a significant step*** forward in making language models more adaptable and efficient in real-world applications.\\n\\n[1] [Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters](https://arxiv.org/abs/2408.03314)\\n\\n[2] [O1 Replication Journey: A Strategic Progress Report \\u2013 Part 1](https://arxiv.org/abs/2410.18982)\\n\\n### Summary of Additional Results During Discussion Period\\n\\nWe conducted extensive supplementary experiments covering:\\n\\n- **Model scaling and diversity (Appendix K)**\\n - An additional base model (Pythia-6.9B)\\n - Larger models (Pythia-12B and Llama3-70B)\\n - An instruction-tuned model (Llama3-8B-Instruct)\\n- **More tasks (Appendix K)**\\n - A complex task (GSM8K)\\n - A specialized out-of-domain task (MMLU-Professional Law)\\n- **Deeper analysis of the framework**\\n - Analysis of task vector usage behavior (Appendix J)\\n - Forcibly applying the top task vectors for each query (Appendix J)\\n - Efficiency analysis of adding a retrieval module (Appendix N)\\n- **Integration with one more existing technique**\\n - 16-shot ICL + ELICIT\\n- **Potential ways to improve the framework**\\n - Results on a diversity-optimized capability library (Appendix L)\\n - Multi-layer intervention (Appendix M)\\n - Results on a task-grouped capability library\\n\\nWe sincerely thank all reviewers for their insightful suggestions. These results have been added to the appendix accordingly, and we have addressed the writing-related feedback in the paper (highlighted in red).\"}", "{\"comment\": \"Thank you for your valuable time and support in reviewing our manuscript. We are grateful for your positive evaluation and decision.\"}", "{\"summary\": \"The article introduces ELICIT, a novel framework to enhance the adaptive capabilities of large language models (LLMs) without the need for extensive fine-tuning or in-context learning demonstrations. 
ELICIT consists of two key modules: a capability library that stores task vectors representing various in-context learned capabilities, and a dynamic retrieval module that selectively activates these vectors based on input queries. Experimental results demonstrate the effectiveness of the model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The new framework with the use of task vector demonstrates effectiveness in improving zero-shot performance.\\n2. The paper is generally well written and easy for readers to understand.\", \"weaknesses\": \"1. The novelty is limited in some aspects, including the use of task vector and the retrieval module.\\n2. Experiments on different models of different sizes should be conducted as the study would better demonstrate that this method is also effective for large models.\\n3. More comprehensive experiments on more datasets are expected, such as MMLU, GSM8K, HumanEval, etc.\", \"questions\": \"1. Why isn't the experiment conducted on instruction-tuned models but base models?\\n2. Did you compute the decrease in inference efficiency caused by the introduction of a new module?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We have submitted our revised manuscript along with additional experiments and responses to the questions. We kindly wanted to remind you, in case the notification was missed, and would greatly appreciate any updates on the responses. Thank you for your time!\"}", "{\"title\": \"A Kind Reminder for Reviewer yoPv\", \"comment\": \"Dear Reviewer yoPv,\\n\\nWe would like to express our sincere gratitude for your thorough and insightful feedback regarding our manuscript. In response to the specific points you have raised, we have provided comprehensive explanations addressing each concern in detail. 
We summarize your questions and our key responses:\\n\\n- **[W1: Novelty of modules]:** We clarify that our core contribution is a novel, modular framework for enhancing the adaptive capabilities of LLMs at low computational cost, reflecting our vision that ***LLMs should be capable of using their own best capabilities when solving problems.*** Compared with previous work, ELICIT can reuse task vectors for arbitrary queries without prior task information. This framework and vision represent **a significant step** forward in efficient LLM adaptation.\\n- **[W2 & Q1: Effectiveness in larger models and instruction-tuned models]** We have conducted experiments to evaluate the effectiveness of ELICIT on more **diverse models** (Pythia-12B, Llama3-70B, Llama3-8B-Instruct). The results demonstrate that ELICIT can enhance the capabilities of these models. Additionally, we explain why we didn\\u2019t experiment on instruction-tuned models.\\n- **[W3: Effectiveness on more datasets]** We have conducted experiments to assess the effectiveness on more datasets (GSM8K, MMLU-Professional-Law). The results demonstrate the **consistent improvements** of ELICIT on these datasets.\\n- **[Q2: Inference efficiency of the retrieval module]** We have analyzed the cost of the retrieval module. The results show a **minimal cost** (0.172 seconds), and even with retrieval included, ELICIT is still **2-3 times faster** than the baselines.\\n\\nWe've incorporated your valuable suggestions into the revised manuscript and truly appreciate your thoughtful feedback.\\n\\nIf you feel our responses have adequately addressed your concerns, we'd be grateful if you could consider revising the score. An improved score would be particularly important for our work at this stage.\\n\\nWe're happy to address any additional questions or comments you might have. Your detailed review has helped us significantly improve our work, and we really appreciate the time you've invested. 
Looking forward to your feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 10183\"}", "{\"metareview\": \"This paper proposes a new approach that improves LLMs' capabilities by introducing an external ICL capability library.\\n\\nThree reviewers support the contributions of this paper with clear acceptance scores, while one reviewer gave a clear rejection.\\n\\nAC carefully read the paper, the reviewers' comments, and the author feedback.\\n\\nWhile R-yoPv pointed out major concerns such as limited novelty and insufficient experiments, the authors provided additional experimental results on those issues. Although yoPv did not provide further feedback, AC thinks that the authors successfully addressed the concerns. \\n\\nSo, AC recommends accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"The initial scores were 8, 8, 3, and 5.\\n\\nNegative reviewers raised some major concerns related to the experiments and limited novelty.\\n\\nDuring the rebuttal period, the authors presented extensive additional experimental results, which addressed the concerns of nS2i.\\nThus, nS2i increased his score from 5 to 8.\\n\\nyoPv did not respond to the authors' comments.\\n\\nThe final scores are 8, 8, 3, and 8.\"}", "{\"title\": \"Response to Reviewer dv9u (1/4)\", \"comment\": \"Thank you for your time and insightful review. Following your suggestions,\\n\\n1. We have conducted additional experiments, including implementing grouped ELICIT, testing with larger models, improving ELICIT by multi-layer intervention, and evaluating a more complex task.\\n2. We have also updated our content to address your feedback regarding data curation, deeper interpretation, and more complete related work. \\n\\nWe hope these responses resolve your concerns.\\n\\n**Weakness**\\n> **W1:**\\u00a0The paper lacks specifics on dataset curation regarding the train-val-test splits and sizes, as well as the sample sizes used to calculate the performance. 
Can you please clarify how many examples were used for the task calculation?\\n> \\n\\nThank you for your suggestions! We followed a systematic approach for dataset splits, with two key principles:\\n\\n1. Maintain a minimum test set size of 80 samples for evaluation.\\n2. Ensure train sets have at least 128 samples to enable ICL prompt construction.\\n\\n**Our specific splitting strategies were as follows:**\\n\\n1. Pre-existing Splits: For datasets like ARC-Challenge, Ethics, GLUE, MathQA, and OpenbookQA, we preserved their original train-val-test splits.\\n2. Train-Valid Only Datasets:\\n - For datasets with validation sets > 350 samples (e.g., BoolQ, Hellaswag): Split validation into new valid-test sets (7:3)\\n - For datasets with validation sets < 350 samples (e.g., CommonsenseQA): Split train set into train-test sets (7:3)\\n3. Test-Only Datasets:\\n - Small test sets (e.g., BBH with 250 samples):\\n - Train: 128 samples\\n - Test: 80 samples\\n - Remaining samples allocated to validation\\n - Large test sets (>1000 samples, e.g., MMLU-Pro-Math, BBQ, Crows Pairs):\\n - Split into train-valid-test (7:2:1)\\n4. Train-Only Datasets (e.g., SuperGLUE, DeepMind): Split into train-valid-test (7:2:1)\\n\\nWe also added these details in Appendix I (highlighted in red). During library building, we sampled from the validation set to determine important hyperparameters, and for evaluation we sampled 100 examples from the test set each time.\\n\\n> **W2:**\\u00a0ELICIT selects the optimal layer for task vectors for each given task and prompt, but assuming that the whole task encoding is stored just within a single layer may not be optimal. Have the authors tested **multi-layer interventions**, and if so, how did they compare? 
**Moreover, could grouping and averaging task vectors per task simplify optimization steps and yield efficient task vector retrieval without extra steps?** (Q1+Q2)\\n> \\n\\nSee response to Q1 and Q2.\\n\\n> **W3:**\\u00a0The paper does not compare with previous task vector approaches mentioned in the related work (Hendel et al., Todd et al., Liu et al., Yan et al.), as well as with other parameter-efficient fine-tuning methods based on modularity (also mentioned in the **related work like Hu et al.**) with 16 shots. I believe such comparisons should be performed and included in the paper. Additionally, the paper does not mention task vector work for visual ICL and multi-modal ICL [1, 2] and including these would provide a comprehensive context and overview of the field.\\n> \\n\\nThank you for your advice. We excluded comparisons with other task vector approaches and PEFT methods since they are **orthogonal to our work** and serve as **alternative techniques** for creating the capability library. We chose task vectors for their **simplicity and efficiency**, requiring only basic forward passes. In contrast:\\n\\n- [Function vectors](https://arxiv.org/abs/2310.15213) require calculating significance metrics for all attention heads.\\n- [In-context vectors](https://arxiv.org/abs/2311.06668) need contrastive datasets to extract desired behavioral directions.\\n- [State vectors](https://arxiv.org/abs/2404.11225) demand optimization.\\n- PEFT methods with individual LoRA adapters per task are more memory-intensive and need training.\\n\\nThese methods involve more complex computations or modifications. Given the time limit of the discussion period, we leave extending our work with these methods to the future.\\n\\nWe also appreciate the suggestion to include visual and multi-modal ICL task vector research in our related work, which would provide broader context for the field. 
We added this in Line 105.\"}", "{\"comment\": \"Thank you for your valuable time and support in reviewing our manuscript. We are grateful for your positive evaluation and decision.\"}", "{\"title\": \"Response to Reviewer BquA (3/3)\", \"comment\": \"> Q3. The current framework augments (at most) a single task vector at a single layer. Could this be extended to multiple vectors or layers to improve performance? (e.g., in cases where a given query is highly relevant to two different task vectors)\\n> \\n\\nThank you for the advice! We considered these two potential improvements as follows:\\n\\n- **Appropriate numbers of chosen task vectors are important**. As mentioned in Line 340, we selected n=10 task vectors from the library during evaluation, enabling selection from different tasks. A simple ablation study in Section 5.2 demonstrates that n=10 yields optimal performance compared to other values.\\n- **Multiple layer intervention shows promise**. We conducted preliminary experiments to explore this idea, where the intervention strength $\\\\alpha=2$ was distributed equally across layers:\\n - **Comparison methods**: We conducted our experiments with the following settings:\\n - *zs*: zero-shot baseline\\n - *Ours (1 layer)*: the original single-layer implementation\\n - *Ours (3 layers)*: intervention on 3 layers (centered on the optimal layer)\\n - *Ours (all layers)*: intervention on all layers\\n - **Results**: As shown in Table X4, the results from Llama3-8B demonstrate an interesting trend: increasing the number of layers involved in the intervention tends to improve overall performance. A deeper and more comprehensive investigation into this phenomenon remains an interesting direction for future research.\\n\\nWe have added these results in Appendix M.\\n\\nTable X4: Comparison of multiple intervention layers on ELICIT. 
The experiments are conducted on Llama3-8B.\\n| | nlu | reasoning | knowledge | math | safety | avg |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| zs | 32.4 | 31.8 | 42.8 | 15.4 | 36.6 | 31.8 |\\n| Ours (1 layer) | 38.3 | 46.9 | 60.7 | 20.6 | 51.1 | 43.5 |\\n| Ours (3 layers) | 38.2 | **47.1** | 61.0 | 21.6 | 51.6 | 43.9 |\\n| Ours (all layers) | **40.9** | 46.3 | **61.4** | **21.7** | **52.4** | **44.5** |\\n\\n> Q4. Tables 2 and 3 show that ELICIT performs significantly better than the ICL baselines for Pythia and Mamba. What would be the reason for this? For example, can ELICIT be more beneficial for models with weaker capabilities? Or could this be related to a specific training recipe, as Pythia and Mamba are trained under the same setup?\\n> \\n\\nIn principle, this observation should **relate to the model\\u2019s weak capability of incorporating contextual information.** To further investigate the reason, we conducted scaling experiments on larger versions of Pythia, ranging from 2.8b to 12b. \\n\\n- **Results**: The results are shown in Table X5. We find that even though the ICL performance increases with model size, our method still outperforms the ICL baseline for all sizes. We cannot yet draw a firm conclusion from our results, but they show that the reason **cannot simply be explained by the scaling of model sizes**. As the various versions of Pythia follow the same training recipe, **we now lean toward the hypothesis that a certain training setup leads to weak contextual capability.**\\n\\nThis is an intriguing question that requires deeper investigation and better understanding. 
We believe this is a valuable direction for future research.\\n\\nTable X5: The results of ELICIT on the Pythia series ranging from 2.8b to 12b.\\n| | | Length | nlu | reasoning | knowledge | math | safety | avg |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Pythia-2.8b | 16-shot | 1597.6 \\u00b1 2.4 | 48.1 \\u00b1 0.4 | 22.2 \\u00b1 0.8 | 12.5 \\u00b1 0.7 | 10.3 \\u00b1 0.9 | 28.2 \\u00b1 0.9 | 24.3 \\u00b1 0.4 |\\n| | ELICIT | 109.8 \\u00b1 1.5 | **60.1 \\u00b1 0.1** | **25.7 \\u00b1 0.9** | **20.9 \\u00b1 1.2** | **14.4 \\u00b1 1.3** | **40.9 \\u00b1 2.5** | **32.4 \\u00b1 0.4** |\\n| Pythia-6.9b | 16-shot | 1598.4 \\u00b1 2.0 | 27.3 \\u00b1 0.3 | 27.3 \\u00b1 0.3 | 27.3 \\u00b1 0.3 | **27.3 \\u00b1 0.3** | 27.3 \\u00b1 0.3 | 27.3 \\u00b1 0.3 |\\n| | ELICIT | 109.8 \\u00b1 1.5 | **38.7 \\u00b1 1.4** | **28.1 \\u00b1 0.5** | **27.9 \\u00b1 1.0** | 18.2 \\u00b1 2.6 | **47.8 \\u00b1 2.0** | **32.2 \\u00b1 0.7** |\\n| Pythia-12b | 16-shot | 1598.2 \\u00b1 1.4 | **47.8 \\u00b1 1.2** | 15.8 \\u00b1 0.4 | 23.7 \\u00b1 0.3 | 14.3 \\u00b1 1.5 | 35.0 \\u00b1 1.9 | 27.3 \\u00b1 0.3 |\\n| | ELICIT | 109.8 \\u00b1 1.5 | 38.5 \\u00b1 0.5 | **29.7 \\u00b1 0.7** | **29.8 \\u00b1 0.6** | **17.5 \\u00b1 2.1** | **46.8 \\u00b1 0.2** | **32.5 \\u00b1 0.5** |\"}", "{\"title\": \"Response to Reviewer yoPv (1/2)\", \"comment\": \"Thank you for your time and reviews.\\n\\n**Weaknesses:**\\n\\n> W1. The novelty is limited in some aspects, including the use of task vector and the retrieval module.\\n> \\n\\nWe believe that our work makes concrete contributions to the community. 
To clarify:\\n\\n- Our work presents **a novel, modular framework** for enhancing the adaptive capabilities of LLMs on demand with minimal computational overhead, which reflects our **vision** that ***LLMs should be capable of using their own best capabilities when solving problems.***\\n - The advantages of the framework are validated through extensive experiments in our paper and acknowledged by all other reviewers (BquA, nS2i, dv9u).\\n- **Different from previous work.** The line of work on task vectors focuses on a single known task. In contrast, with our capability library, we can now **reuse** task vectors for an arbitrary query without needing to know what the task is.\\n- This framework and vision represent **a significant step** forward in making language models more adaptable and efficient in real-world applications, acknowledged by Reviewers **dv9u and nS2i.**\\n\\n> W2. Experiments on different models of different sizes should be conducted as the study would better demonstrate that this method is also effective for **large models**.\\n> \\n\\nThank you for your advice! We conducted additional experiments on **larger models**, Pythia-12B and Llama3-70B.\\n\\n- **Results**: The results are shown in Table X7. We find that ELICIT is also effective for larger models.\\n\\nTable X7: ELICIT performance on Pythia-12B and Llama3-70B. 
Pythia-12B results are averaged over three seeds, while Llama3-70B uses one seed.\\n\\n| | | **Length** | **nlu** | **reasoning** | **knowledge** | **math** | **safety** | **avg** |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| **Pythia-12B** | 16-shot | 1598.2 \\u00b1 1.4 | 47.8 \\u00b1 1.2 | 15.8 \\u00b1 0.4 | 23.7 \\u00b1 0.3 | 14.3 \\u00b1 1.5 | 35.0 \\u00b1 1.9 | 27.3 \\u00b1 0.3 |\\n| | bm25 | 2184.2 \\u00b1 29.7 | 42.6 \\u00b1 1.4 | 20.3 \\u00b1 0.2 | 21.0 \\u00b1 0.4 | 14.0 \\u00b1 1.5 | 30.1 \\u00b1 0.8 | 25.6 \\u00b1 0.4 |\\n| | zs | 109.8 \\u00b1 1.5 | 34.7 \\u00b1 0.6 | 20.7 \\u00b1 0.2 | 18.1 \\u00b1 0.6 | 7.9 \\u00b1 1.7 | 34.6 \\u00b1 0.6 | 23.2 \\u00b1 0.2 |\\n| | ELICIT | 109.8 \\u00b1 1.5 | **38.5 \\u00b1 0.5** | **29.7 \\u00b1 0.7** | **29.8 \\u00b1 0.6** | **17.5 \\u00b1 2.1** | **46.8 \\u00b1 0.2** | **32.5 \\u00b1 0.5** |\\n| **Llama3-70B** | bm25 | 1823.1 | 42.8 | 78.1 | 80.9 | 49.2 | 71.5 | 64.5 |\\n| | 16-shot | 1411.5 | 40.1 | 76.3 | 83.3 | 51.4 | 70.9 | 64.4 |\\n| | zs | 101.1 | 50.9 | 66.8 | 59.7 | 37.6 | 44.2 | 51.8 |\\n| | ELICIT | 101.1 | **55.9** | **80.5** | **84.6** | **52.4** | **67.4** | **68.2** |\\n\\n> W3. More comprehensive experiments on **more datasets** are expected, such as MMLU, GSM8K, HumanEval, etc.\\n> \\n\\nThank you for your advice! We conducted additional experiments on more datasets:\\n\\n- We expanded our existing capability library of Llama3-8B using GSM8K, which now consists of 21 tasks, and conducted experiments on GSM8K (in-domain) and a subset of MMLU (Professional Law, as out-of-domain).\\n- **Results**: We find that we could improve the performance on these two datasets while maintaining improvements on the other 25 tasks (20 in-domain tasks in Table 3 and 5 out-of-domain tasks in Table 4) in both zero-shot and few-shot scenarios. This also shows the scalability of the capability library.\\n\\nTable X8: The results of ELICIT on GSM8K and MMLU-Professional-Law on Llama3-8B. 
GSM8K is an in-domain task and MMLU-Professional-Law is out-of-domain.\\n| | gsm8k (in-domain) | mmlu-professional-law (ood) |\\n| --- | --- | --- |\\n| **zs** | 30.44 | 31.67 |\\n| **ELICIT** | **32.44** | **41.11** |\\n| **16shot** | 42.87 | 30.78 |\\n| **16shot+ELICIT** | **43.22** | **41.89** |\"}", "{\"summary\": \"This paper introduces ELICIT, a novel framework for adapting Large Language Models (LLMs) to diverse tasks, using task vectors and in-context learning (ICL). Inspired by the concept of modularization, ELICIT comprises two core components\\u2014Build Capability Library and Dynamic Capability Elicitation. By building a library of task vectors, each representing one in-context capability, ELICIT dynamically leverages this library to selectively retrieve and activate capabilities based on any given query, ensuring efficient and flexible elicitation of the model\\u2019s capabilities.\\n\\nTo build the capability library, task vectors are stored across layers for each task, along with prompts for future reuse, and the position of the optimal layer, which is determined based on the hold-out validation set and zero-shot inferences with task vector interventions to the model. Interventions within the model can take the form of a direct replacement or a linear combination, with the latter shown to perform better. 
Later, task vectors are retrieved with the Dynamic Capability Elicitation module that employs a SimCSE RoBERTa model for relevant task vector selection and a threshold-based filtering approach based on AUC scores from the validation set.\\n\\nELICIT shows good performance across 20 ICL tasks and four models, outperforming other baselines across tasks, models and query formats, while also showing good generalization on unseen tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**S1:** Paper explores a novel and promising direction for in-context learning by leveraging task vectors with an external modular concept which is really interesting and aligns well with the recent work in the field of in-context learning and task-vectors.\", \"**S2:** Proposed method ELICIT demonstrates strong performance and generalization across diverse tasks and models.\", \"**S3:** This paper has a clear mathematical presentation with good explanations of ICL and task vectors.\"], \"weaknesses\": [\"**W1:** The paper lacks specifics on dataset curation regarding the train-val-test splits and sizes, as well as the sample sizes used to calculate the performance. Can you please clarify how many examples were used for the task calculation?\", \"**W2:** ELICIT selects the optimal layer for task vectors for each given task and prompt, but assuming that the whole task encoding is stored just within a single layer may not be optimal. Have the authors tested multi-layer interventions, and if so, how did they compare? 
Moreover, could grouping and averaging task vectors per task simplify optimization steps and yield efficient task vector retrieval without extra steps?\", \"**W3:** The paper does not compare with previous task vector approaches mentioned in the related work (Hendel et al., Todd et al., Liu et al., Yan et al.), as well as with other parameter-efficient fine-tuning methods based on modularity (also mentioned in the related work like Hu et al.) with 16 shots. I believe such comparisons should be performed and included in the paper. Additionally, the paper does not mention task vector work for visual ICL and multi-modal ICL [1, 2] and including these would provide a comprehensive context and overview of the field.\", \"**W4:** Figure 3 illustrates the trade-off between stronger interventions and language modeling performance on WikiText, which is an expected observation since ICL and general language modeling operate with different circuits [3, 4, 5]. Having stronger interventions steers the activations further from the pretrained task, thus resulting in worse performance, which is what previous works also showed. The authors do not analyze or explain this observation, but just comment on how the strength of interventions affects the ICL and language modeling performances. I believe further discussion and explanation should be included.\", \"**W5:** The proposed method relies heavily on validation data to select optimal hyperparameters and determine the filtering threshold. And the similarity-based model for task vector retrieval is further trained. Does relying on validation tuning affect the scalability and efficiency? Can you please explain how this similarity model was trained, and with what data?\", \"**W6:** Tables 1 and 2 contain additional bolded entries, and captions are not descriptive enough, missing information about the sample size for the BM25 and evaluation in general. 
Further, figure 6 does not have a clear explanation of components labeled a, b, and c, while also missing a description in general. Finally, there is a typo in the appendix in the title for the Similarity based retrieval methods.\", \"[1] https://arxiv.org/abs/2404.05729\", \"[2] https://arxiv.org/abs/2406.15334\", \"[3] https://arxiv.org/abs/2205.05055\", \"[4] https://arxiv.org/abs/2404.07129\", \"[5] https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html\"], \"questions\": [\"While the paper presents an interesting direction within ICL with modular task vectors and shows better performance than zero-shot and classical ICL, further improvement of clarity, related work comparison, and validation dependency could strengthen the paper even more. I would recommend acceptance if the authors address these points and the questions within the weakness section and the following one:\", \"Have you tried aggregating task vectors by task, and if so, what were the results?\", \"Have you considered using multiple layers to represent task vectors instead of relying on a single optimal layer? If yes, how did this affect performance?\", \"Can you please clarify if only the most similar task vector is used for intervention in the end? If so, does this mean many task vectors remain unused?\", \"Can you include comparisons to related task vector approaches from previous ICL research you mentioned in the related work? 
Can you also compare your method against PEFT modular methods with a few-shot regime?\", \"How well does your approach scale to larger LLMs?\", \"Can ELICIT be extended for multi-modal applications or other, more complex tasks?\", \"Does ELICIT support compositionality, such that combining different task vectors can represent new tasks?\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We have submitted our revised manuscript along with additional experiments and responses to the questions. We kindly wanted to remind you, in case the notification was missed, and would greatly appreciate any updates on the responses. Thank you for your time!\"}", "{\"title\": \"A kind reminder for Reviewer yoPv\", \"comment\": \"Dear Reviewer yoPv,\\n\\nWe'd like to send a gentle reminder that we have submitted our rebuttal addressing your comments. We sincerely appreciate your review and thoughtful feedback, which has helped us improve our manuscript. \\n\\nWe are grateful that the other reviewers recognized the significance of our work, recommending acceptance with high scores of 8. We would appreciate the opportunity to discuss any remaining concerns and answer any further questions you may have.\\n\\nWe thank you again for taking the time to review our work.\\n\\nBest regards,\\n\\nAuthors of Paper 10183\"}", "{\"comment\": \"Thank you for taking the time to provide such constructive feedback and for recommending acceptance. Your insights and support mean a great deal to us.\"}", "{\"comment\": \"Thanks for the detailed rebuttal, addressing my concerns and providing additional clarifications.\\n\\nI appreciate the additional experiments on scaling, multilayer interventions, and aggregation. I find the experiments regarding the multilayer interventions and compositionality particularly interesting. 
\\n\\nAfter reading the other reviews and rebuttals, as well as checking the revised manuscript, with its improved visuals, explanations, and supplementary materials, I believe the new version of the manuscript highlights the value of the proposed method and its potential impact in the community. \\n\\nI have decided to raise my score accordingly.\"}", "{\"summary\": \"This paper proposes ELICIT. It's a framework that aims at improving LLMs' capabilities by introducing an external ICL capacity library. This library stores task vectors, which represent in-context learned abilities, enabling models to retrieve relevant skills dynamically without additional training tokens or fine-tuning. The approach allows LLMs to handle diverse tasks by selectively activating specific capabilities when needed, thus improving both versatility and computational efficiency.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper comes up with an interesting and intuitive solution to improve LLMs' abilities using the task vectors.\\n2. Extensive experiments over various models and tasks.\\n3. Experimental results show a great advantage of the method over others.\\n4. This novel plug-and-play framework could benefit other methods on the same task.\", \"weaknesses\": \"1. I would expect the proposed ELICIT method to be integrated into more existing strategies such as few-shot learning.\\n2. Have you tried using the capacity bank with creation methods other than ICL?\", \"questions\": \"See weakness above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes ELICIT, a framework that stores the task vectors corresponding to different in-context-learning (ICL) prompts and dynamically augments the given question with retrieved task vectors to provide ICL abilities without explicitly forwarding the long ICL prompts. 
ELICIT shows comparable or better performance than ICL baselines on diverse tasks while being significantly more efficient.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The writing is clear and easy to follow. The motivation and design of each component is straightforward to understand.\\n2. Dynamically augmenting task vectors is significantly more efficient than in-context learning while showing competitive or even better performance.\\n3. The proposed approach can be applied to existing LLMs in a plug-and-play manner, making ELICIT easy to deploy.\", \"weaknesses\": \"1. Some details regarding the experiment setup need to be included. For example, the paper does not describe how the ICL prompts $p_i^{t}$ are chosen.\", \"questions\": \"1. How are the ICL prompt $p_i^{t}$ chosen? (e.g., Are they a group of randomly selected examples?) If there is a technique to maximize the diversity of the prompts in a given library, would it also boost the model performance?\\n2. Figure 5 shows that ELICIT boosts performance on relevant tasks while minimally compromising performance on non-related tasks. Is this because task vectors are not applied for unrelated tasks, or does the model perform well even when unrelated task vectors are given?\\n3. The current framework augments (at most) a single task vector at a single layer. Could this be extended to multiple vectors or layers to improve performance? (e.g., in cases where a given query is highly relevant to two different task vectors)\\n4. Tables 2 and 3 show that ELICIT performs significantly better than the ICL baselines for Pythia and Mamba. What would be the reason for this? For example, can ELICIT be more beneficial for models with weaker capabilities? 
Or could this be related to a specific training recipe, as Pythia and Mamba are trained under the same setup?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer BquA (1/3)\", \"comment\": \"Thank you for your thoughtful review and suggestions. We have incorporated additional experiments based on your feedback and revised the manuscript accordingly. Please let us know if you have any further questions or suggestions.\\n\\n**Weaknesses:**\\n\\nW1. Some details regarding the experiment setup need to be included. For example, the paper does not describe how the ICL prompts $p_i^{t}$ are chosen. \\n\\nSee [Q1](#q1).\\n\\n**Questions:**\\n\\n\\n> Q1: How are the ICL prompt $p_i^{t}$ chosen? (e.g., Are they a group of randomly selected examples?) If there is a technique to maximize the diversity of the prompts in a given library, would it also boost the model performance?\\n> \\n\\nThanks for your advice! We have clarified in Line 239 that the demonstrations in the ICL prompts are randomly selected. \\n\\nMaximizing the diversity of the prompts in a given library is a good idea! We conduct an additional experiment with embedding diversity, comparing random demonstration selection with diversity-optimized prompts as described in this [paper](https://arxiv.org/pdf/2209.01975). 
\\n\\n- **Comparison Methods**: we compare\\n - *Zero-shot*: zero-shot baseline\\n - *ELICIT*: original ELICIT implementation with randomly selected ICL demonstrations\\n - *ELICIT (diversity)*: modified ELICIT using the new capability library with diversity-optimized demonstrations.\\n- **Results:** As shown in Table X1, **the diversity-optimized prompts work well in some cases but not in others.** Compared to the original ELICIT, while performance improved in reasoning (+1.1%), math (+0.5%) and NLU tasks (+4.5%), there was a decline in Knowledge (-5.9%) and Safety (-2.3%) ability.\\n\\nThis result suggests the **potential** for future work to **improve our pipeline** by enhancing the quality of task vectors through better demonstration selection methods.\", \"table_x1\": \"The comparison of ELICIT using different capability libraries based on different ICL prompts. The experiments are conducted on Llama3-8B.\\n\\n| | **NLU** | **Reasoning** | **Knowledge** | **Math** | **Safety** | **Avg.** |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| Zero-shot | 32.2 \\u00b1 1.2 | 32.9 \\u00b1 0.2 | 42.5 \\u00b1 1.2 | 14.0 \\u00b1 1.0 | 35.5 \\u00b1 1.2 | 31.4 \\u00b1 0.7 |\\n| ELICIT | 38.1 \\u00b1 0.9 | 46.1 \\u00b1 0.3 | **60.7 \\u00b1 1.2** | 19.4 \\u00b1 1.1 | **49.4 \\u00b1 2.1** | **42.7 \\u00b1 0.8** |\\n| ELICIT (diversity) | **42.6 \\u00b1 0.3** | **47.2 \\u00b1 0.1** | 54.8 \\u00b1 1.5 | **19.9 \\u00b1 0.8** | 47.1 \\u00b1 2.6 | 42.3 \\u00b1 0.9 |\\n\\nWe have added these results in Appendix L.\"}", "{\"comment\": \"Thanks for the response; the authors have partially addressed my concerns and left the rest for future explorations. I'll keep my score.\"}", "{\"title\": \"Response to Reviewer dv9u (4/4)\", \"comment\": \"> **Q4:** Can you include comparisons to related task vector approaches from previous ICL research you mentioned in the related work? 
Can you also compare your method against PEFT modular methods with a few-shot regime?\\n> \\n\\nSee W3.\\n\\n> **Q5:** How well does your approach scale to larger LLMs?\\n> \\n\\nThank you for your advice! We conduct additional experiments on Pythia-12B and Llama3-70B.\\n\\n- **Results**: The results are shown in Table X7. We find that ELICIT **is also effective for larger models.**\\n\\nWe have added these results in Appendix K.\", \"table_x7\": \"ELICIT performance on Pythia-12B and Llama3-70B. Pythia-12B is run with three seeds, while Llama3-70B uses one seed.\\n\\n| | | **Length** | **nlu** | **reasoning** | **knowledge** | **math** | **safety** | **avg** |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| **Pythia-12B** | 16shots | 1598.2 \\u00b1 1.4 | 47.8 \\u00b1 1.2 | 15.8 \\u00b1 0.4 | 23.7 \\u00b1 0.3 | 14.3 \\u00b1 1.5 | 35.0 \\u00b1 1.9 | 27.3 \\u00b1 0.3 |\\n| | bm25 | 2184.2 \\u00b1 29.7 | 42.6 \\u00b1 1.4 | 20.3 \\u00b1 0.2 | 21.0 \\u00b1 0.4 | 14.0 \\u00b1 1.5 | 30.1 \\u00b1 0.8 | 25.6 \\u00b1 0.4 |\\n| | zs | 109.8 \\u00b1 1.5 | 34.7 \\u00b1 0.6 | 20.7 \\u00b1 0.2 | 18.1 \\u00b1 0.6 | 7.9 \\u00b1 1.7 | 34.6 \\u00b1 0.6 | 23.2 \\u00b1 0.2 |\\n| | ELICIT | 109.8 \\u00b1 1.5 | **38.5 \\u00b1 0.5** | **29.7 \\u00b1 0.7** | **29.8 \\u00b1 0.6** | **17.5 \\u00b1 2.1** | **46.8 \\u00b1 0.2** | **32.5 \\u00b1 0.5** |\\n| **Llama3-70B** | bm25 | 1823.1 | 42.8 | 78.1 | 80.9 | 49.2 | 71.5 | 64.5 |\\n| | 16shot | 1411.5 | 40.1 | 76.3 | 83.3 | 51.4 | 70.9 | 64.4 |\\n| | zs | 101.1 | 50.9 | 66.8 | 59.7 | 37.6 | 44.2 | 51.8 |\\n| | ELICIT | 101.1 | **55.9** | **80.5** | **84.6** | **52.4** | **67.4** | **68.2** |\\n\\n> **Q6:** Can ELICIT be extended for multi-modal applications or other, more complex tasks?\\n> \\n\\nThanks for your advice. Multi-modality would be a good extension for ELICIT! As ELICIT has no constraint on the modality, we believe it can be extended to multi-modal tasks. 
We will investigate this direction as future work.\\n\\nRegarding for handling more complex tasks, we experiment on GSM8K, which contains complex chain-of-thought generation and reasoning.\\n\\n- **Results:** as shown in Table X13, ELICIT can handle GSM8K.\", \"table_x13\": \"The results of ELICIT on GSM8K as in domain task based on Llama3-8B.\\n\\n| | gsm8k |\\n| --- | --- |\\n| **zs** | 30.44 |\\n| **ELICIT** | **32.44** |\\n| **16shot** | 42.87 |\\n| **16shot+ELCIT** | **43.22** |\\n\\nOur results can be found here and Appendix K in the paper.\\n\\n> **Q7:** Does ELICIT support compositionality, such that combining different task vectors can represent new tasks?\\n> \\n\\nIndeed, while we did not explicitly design for compositionality initially, **our approach demonstrates emergent compositional properties.** \\n\\n- **The success of out-of-domain task handling using multiple task vectors already exhibits a form of compositionality**. To illustrate this, we present a case study using Pythia-6.9B selecting top-20 task vectors, where demonstrating combining different task vectors successfully represent an unseen DeepMind input.\\n \\n| | |\\n| --- | --- |\\n| input | The following are multiple choice questions (with answers) about algebraic word problems. Finish your answer with 'X' where X is the correct letter choice.\\\\n\\\\nQuestion: A sales staff is composed of a sales manager and two sales people, all of whom earn commission as a percentage of sales. Each sales person earns 5% commission on sales. In a given week, the sales staff earned a total of 2,500 in commissions on 5,000 worth of sales. What commission rate did the sales manager earn during that week?\\\\nOptions:\\\\nA. 25%\\\\nB. 30%\\\\nC. 35%\\\\nD. 40%\\\\nE. 
45%\\\\nAnswer: |\\n| chosen task vectors | 10 MathQA task vectors + 2 CommonsenseQA task vectors + 8 BBH Boolean Expression task vectors |\\n| Original Output | C |\\n| ELICIT Output | **D (correct)** |\\n| | |\\n\\nWe believe ELICIT can be designed to achieve greater compositionality, which is an interesting exploration.\"}", "{\"title\": \"Response to Reviewer nS2i (1/1)\", \"comment\": \"Thank you for reviewing and appreciating our work. We have added the suggested experiments and will update the paper accordingly. Please let us know if you have any additional questions or concerns.\\n\\n**Weaknesses:**\\n\\n> W1. I would expect the proposed ELICIT method to be integrated into more existing strategies such as few-shot learning.\\n> \\n\\nExactly. We are not sure about the specific few-shot learning approach you mentioned. We conducted additional experiments using ELICIT augmenting 16-shot in-context learning on Pythia-6.9B:\\n\\n- **Results:** The results in Table X6 demonstrate that ELICIT achieves performance improvements even in few-shot settings.\\n\\nIf this wasn't what you meant, please let us know your specific thoughts about few-shot learning, and we'll be happy to verify it accordingly.\", \"table_x6\": \"ELICIT as a plug-and-play performance booster: Performance Gains When Combined with 16-shot ICL on In-Domain Tasks. The experiments are conducted on Pythia-6.9B.\\n\\n| | **Length** | **nlu** | **reasoning** | **knowledge** | **math** | **safety** | **avg** |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| **16shot** | 1595.6 | **46.9** | 23.8 | 22.2 | 10.9 | 34.3 | 27.7 |\\n| **16shot + ELICIT** | 1595.6 | 42.1 | **26.3** | **25.3** | **13.8** | **39.4** | **29.4** |\\n\\n> W2. Have you tried using the capacity bank with creation methods other than ICL?\\n> \\n\\nWe agree there exist more creation methods for building the capability library beyond ICL, such as prompt optimization. 
We haven't explored these approaches yet because our initial focus was on using ICL to investigate the possibility of enhancing LLMs' adaptive capabilities for arbitrary queries with minimal computational overhead. While we acknowledge that extending this work to other methods could be valuable, we consider it an interesting direction for future research.\"}" ] }
CH7Ba4RFa2
Seg-LaneDet: 3D Lane Detection from Monocular Images with 2D Segmentation
[ "Yu Wang", "Yifan Jiao", "Tian He", "Zhenming Zhang", "weijie qiu", "Zhuqing Jiang" ]
Monocular 3D lane detection is a fundamental yet challenging task in autonomous driving. Recent advancements primarily rely on constructing 3D surrogates from monocular images and camera parameters. However, misalignment is introduced in current methods due to the lack of dense depth information in datasets, coupled with the inherent depth ambiguity of monocular images. To address this issue, we propose Seg-LaneDet, a simple but effective end-to-end 3D lane detector. We frame the task of 3D lane detection as an elevation from 2D to 3D detection. Specifically, we leverage a pre-trained 2D lane detector to obtain instance segmentation of lanes, of which the segmentation maps serve as the sole prior for the 2D-to-3D module. This allows us to achieve a straightforward 3D lane representation based on front-view segmentation maps. Our method demonstrates comparable performance to state-of-the-art (SOTA) F1 scores on the OpenLane and the Apollo datasets.
[ "3D Lane Detection", "Autonomous Driving", "Computer Vision" ]
https://openreview.net/pdf?id=CH7Ba4RFa2
https://openreview.net/forum?id=CH7Ba4RFa2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "fHqeTiqB99", "FLg5k4XTG8", "7m7RBQjfPi", "0rhZGUWDUb" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731565234030, 1729568220130, 1730536904756, 1730627464979 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9870/Authors" ], [ "ICLR.cc/2025/Conference/Submission9870/Reviewer_wvLa" ], [ "ICLR.cc/2025/Conference/Submission9870/Reviewer_KBFY" ], [ "ICLR.cc/2025/Conference/Submission9870/Reviewer_6yNZ" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes a two-stage method for 3D lane detection. In the first stage, a pre-trained 2D lane detector identifies lanes from front-view images. In the second stage, a U-shaped network lifts the detected lanes into the 3D scene by estimating their spatial positions. In addition to a point-level loss, the paper introduces lane-level and scene-level loss functions to enhance the accuracy and consistency of lane predictions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper leverages 2D lane segmentation maps to enable 3D lane detection by capturing detailed semantic information from the image. It further employs a U-shaped lifting module to estimate depth, transforming 2D segmented lanes into 3D space.\", \"weaknesses\": \"1. This paper lacks novelty, as the combination of 2D lane segmentation and depth estimation has already been proposed by ONCE-3DLanes [1]. Additionally, ONCE-3DLanes does not rely on pre-trained segmentation; instead, it jointly learns segmentation during the training process. This distinction makes the proposed method simpler and less comprehensive than ONCE-3DLanes. Furthermore, ONCE-3DLanes is neither adequately discussed in the related work section nor compared in the experimental results. 
A comparison is strongly recommended to properly position the contribution of the proposed method within the context of prior work.\\n\\n2. Although the proposed scene-level loss shows promising results in the ablation study (Table 5), it remains unclear how UV supervision benefits the U-shaped Lifting Module (ULM), whose primary objective is depth estimation. The connection between UV coordinate supervision and improved depth prediction needs further explanation.\\n\\n3. The definition of ASM is unclear. Could you please specify its inputs and outputs to clarify its role and functionality?\\n\\n[1]@InProceedings{yan2022once,\\ntitle={ONCE-3DLanes: Building Monocular 3D Lane Detection},\\nauthor={Yan, Fan and Nie, Ming and Cai, Xinyue and Han, Jianhua and Xu, Hang and Yang, Zhen and Ye,\\nChaoqiang and Fu, Yanwei and Bi Mi, Michael and Zhang, Li},\\nbooktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},\\nyear={2022}\\n}\", \"questions\": \"1. Why adopt two-stage training instead of end-to-end training? If the first stage makes a prediction error, it cannot be corrected in the second stage. Moreover, since the segmentation results are also fed into the U-shaped Lifting Module (ULM), any inaccuracies in the segmentation might negatively impact the depth estimation performance.\\n\\n2. The description of ASM is unclear. In Line 210, it is mentioned that ASM is shown in Figure 3, but no such reference to ASM can be found in the figure.\\n\\n3.I guess from Line 213 that the input to ASM is a single row (line) of the 4C \\u00d7 H/16 \\u00d7 W/16 feature map, with the output having the same dimensions as the input. Is my understanding correct? 
Additionally, the meaning of absolute scale in your paper remains unclear\\u2014does it refer to, or is it equivalent to, depth?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work models the 3D lane detection problem as a segmentation-and-lifting paradigm, which first utilizes a 2D lane detector to produce 2D instance segmentation results, and then utilizes a U-Net to lift the 2D results to 3D space. To achieve this goal, this paper introduces point-level, lane-level and scene-level losses to regulate the lane-line learning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"I. The paper models 3D lane detection from a 2D segmentation-and-lifting perspective, which is an intriguing direction for exploration.\\n\\nII. To learn a better 3D lane representation, this work introduces a hybrid loss to supervise model learning.\", \"weaknesses\": \"I. Unclear Notation:\\n\\n1. The notation $M_p$ in Figure 2 lacks an explanation. Providing a brief explanation for it would be helpful.\\n\\nII. Lack of Comparative Analysis:\\n\\nThe paper does not include a comparison with SALAD[2] (ONCE-3DLanes: Building Monocular 3D Lane Detection), which employs a related approach. Given SALAD's relevance, the authors should consider including such a comparison, which is crucial to highlight the distinct contributions of Seg-LaneDet and to situate its effectiveness within the current body of work.\\n\\nIII. Limited Novelty:\\n\\nThe method\\u2019s novelty appears limited due to its reliance on a relatively straightforward UNet-like Module (ULM) for predicting 3D lane outputs. This approach, while practical, does not clearly stand out against recent, more innovative methods in 3D lane detection. This simplicity raises questions about whether it provides substantial advancements or improvements over existing techniques.\\n\\nIV. 
Subpar Performance:\\n\\nThe reported performance of the model does not consistently match that of prior state-of-the-art methods. Notably, the **error** is much higher compared to models like LATR, e.g., X-errors: LATR: [0.219, 0.259] versus This Work: [0.483, 0.850], Z-errors: LATR: [0.075, 0.104] versus This Work: [0.362, 0.745]. This raises concerns about the reliability and real-world applicability of Seg-LaneDet. \\n\\nV. Experiment Analysis:\\n\\nTable 2 shows that the model achieves better F1 scores in extreme weather and nighttime scenarios. The paper should provide an analysis of the reasons behind this performance, as it would clarify the proposed method\\u2019s effectiveness in challenging environments.\", \"questions\": \"I. The scene loss description (L265-268) is confusing and ambiguous. It states:\\n\\n > To mitigate projection error, ..., we establish a system of linear equations using the 3D points and their corresponding pixel coordinates from the ground truth.\\n\\n Specifically, are these pixel coordinates derived from a projection using camera parameters? If yes, what is the difference between this and a direct projection of 3D points using camera parameters? If not, the authors should provide a brief explanation of how these \\\"accurate\\\" pixel coordinates, corresponding to the 3D points, are obtained. This clarification would help distinguish the introduced method from a direct projection approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Key Findings of the Paper\\n\\n1. **Seg-LaneDet Proposal**: The paper introduces Seg-LaneDet, a novel 3D lane detection framework for autonomous driving, based on 2D segmentation and designed for front-view, monocular camera images. This model effectively converts 2D lane detection results to 3D without relying on depth maps or complex 3D proxies.\\n2. 
**U-shaped Lifting Module (ULM)**: A central component of Seg-LaneDet, the ULM, lifts 2D data into a 3D space by incorporating both relative and absolute scale information. This approach aims to bypass misalignment challenges and enhance accuracy on varied road surfaces. This is the most valuable design in this paper to the community.\\n3. **Hierarchical Loss Function**: The paper introduces a multi-level loss function that improves point, lane, and scene-level accuracy, ultimately enhancing the model's performance on F1 scores. However, the model shows limitations in X/Z accuracy, a known constraint in segmentation-based approaches.\\n4. **Empirical Performance**: Tested on OpenLane and Apollo datasets, Seg-LaneDet achieved competitive F1 scores, notably excelling in challenging conditions like extreme weather and night scenes. It demonstrated improvements in efficiency and error handling over existing state-of-the-art methods but still showed performance limitations on precise X/Z error measurements.\\n5. **Ablation Studies**: These studies underscore the essential role of 2D segmentation maps and the benefits of adding positional embeddings. The hierarchical loss function and ULM significantly contribute to the system\\u2019s performance, validating the design choices.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Strengths\\n\\n1. **Innovative Approach**: The study introduces a unique method for 3D lane detection that relies on 2D segmentation maps instead of dense depth data, reducing complexity and computational cost. By leveraging 2D-to-3D lifting through a U-shaped Lifting Module (ULM), it provides a simpler yet effective approach that improves compatibility with monocular camera inputs, a highly cost-effective option for autonomous driving.\\n2. 
**Extensive Empirical Testing**: With testing on large, real-world datasets (OpenLane and Apollo) under varied conditions (night, extreme weather, curves, intersections), the study provides a thorough evaluation. The model\\u2019s competitive F1 scores validate its robustness across different scenarios, demonstrating resilience under complex driving conditions.\\n3. **Modular and Scalable Design**: The model\\u2019s reliance on existing pre-trained 2D lane detection modules allows for adaptability and potentially easier integration with new advances in 2D lane detection.\\n4. **Writing**: The writing is fluent and the figures are illustrative. The related work covers necessary related topics such as different lane methods and their pros and cons. And the figures are illustrative at an idea level, especially the visualization of the method pipeline.\", \"weaknesses\": \"Weaknesses\\n\\n1. **Dependency on Pre-trained 2D Detection**: The reliance on a pre-trained 2D lane detector means that Seg-LaneDet\\u2019s performance is tied to the quality of this prior module. Any inaccuracies in the 2D segmentation could propagate errors through the 3D lifting process, impacting overall reliability in situations where lane visibility is compromised, such as in heavy rain or sharp curves. Have you analyzed the impact of 2D detection errors on the final 3D output, or tested methods to make the 3D lifting process more robust to inaccuracies in the 2D segmentation?\\n2. **The advancement to SoTA LATR**: Although the proposed method is segmentation-based, it would be much better if the author can analyze what is the gap to the SoTA LATR and explain possible reasons on this. It would be great if there is any ablation studies or analyses to understand the key factors contributing to the performance gap between Seg-LaneDet and LATR.\\n3. **Some typos**: Table 5 has duplicate table names.\", \"questions\": \"Question\\n\\n1. Runtime efficiency of the proposed models. 
Since the model is in a two-stage fashion, I wonder if it will get slower compared to those one-stage models listed in Tables 1 and 2. Also, any optimizations or explanations the authors have implemented to mitigate potential speed disadvantages of their two-stage approach would be appreciated.\\n2. What is the supposed meaning of the output of the ULM module? The shape is (H, W, 3) and the author stated that this is XYZ. I am wondering about its supervision.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
CGhgB8Kz8i
Innovative Thinking, Infinite Humor: Humor Research of Large Language Models through Structured Thought Leaps
[ "Han Wang", "Yilin Zhao", "Dian Li", "Xiaohan Wang", "sinbadliu", "Xuguang Lan", "Hui Wang" ]
Humor was previously regarded as a gift exclusive to humans, for the following reasons. Humor is a culturally nuanced aspect of human language, presenting challenges for its understanding and generation. Humor generation necessitates a multi-hop reasoning process, with each hop founded on proper rationales. Although many studies, such as those related to GPT-o1, focus on logical reasoning with reflection and correction, they still fall short in humor generation. Due to the sparsity of the knowledge graph in creative thinking, it is arduous to achieve multi-hop reasoning. Consequently, in this paper, we propose a more robust framework for addressing the humor reasoning task, named LoL. LoL aims to inject external information to mitigate the sparsity of the knowledge graph, thereby enabling multi-hop reasoning. In the first stage of LoL, we put forward an automatic instruction-evolution method to incorporate the deeper and broader thinking processes underlying humor. Judgment-oriented instructions are devised to enhance the model's judgment capability, dynamically supplementing and updating the sparse knowledge graph. Subsequently, through reinforcement learning, the reasoning logic for each online-generated response is extracted using GPT-4o. In this process, external knowledge is re-introduced to aid the model in logical reasoning and the learning of human preferences. Finally, experimental results indicate that the combination of these two processes can enhance both the model's judgment ability and its generative capacity. These findings deepen our comprehension of the creative capabilities of large language models (LLMs) and offer approaches to boost LLMs' creative abilities for cross-domain innovative applications.
[ "Large Language Model", "humor generation", "reinforcement learning" ]
Accept (Poster)
https://openreview.net/pdf?id=CGhgB8Kz8i
https://openreview.net/forum?id=CGhgB8Kz8i
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ydpazNKyOR", "wQGK97DsAb", "pVGoNp7Q0u", "pD6qJ36aNi", "oCSrEK4vkD", "lvMx2GXJin", "lmVp6RGqHe", "iG2ay2uU7J", "hTLsOIWqr9", "cofCKMvz6E", "coOLgBEVW4", "ZIhVdzo2TI", "XPHfDqoqmd", "UEUDZmQryp", "TcSYNw6rIc", "SpTke2tyNL", "LStyg02Fv2", "LMudxju6Sw", "D6ScAJ7ulo", "CEeYNc9epf", "B8pp1n62Rc", "7KAZtOzjcr", "673JbNOnlN", "5MMMYU1ZGP", "4ynD6lmjGM", "4L8lawG7JX", "3vWucaqWVq", "30Cm7rQsps" ], "note_type": [ "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737523783903, 1734868549819, 1732535318816, 1733128987167, 1732374948481, 1732535251176, 1733129697911, 1732531893302, 1730589593616, 1733129639989, 1730813818890, 1732374644102, 1732375088105, 1732714080915, 1732870967939, 1729071675089, 1733065134553, 1732375323603, 1732475239264, 1732375096613, 1730592782506, 1733129664447, 1732870403728, 1732869427035, 1732374926106, 1732375271224, 1732535274791, 1732869388429 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6669/Area_Chair_gsN1" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Reviewer_Unwt" ], [ "ICLR.cc/2025/Conference/Submission6669/Reviewer_JsCo" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6669/Reviewer_Unwt" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Reviewer_SKMr" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Reviewer_SKMr" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Area_Chair_gsN1" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Reviewer_gQyc" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ], [ "ICLR.cc/2025/Conference/Submission6669/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"The paper proposes a new framework called Creative Leap of Structured Thought (CLoST) to reinforce humor understanding by LLMs. The authors propose a systematic method inspired by KGs and causal relationships. The framework consists of two stages: Associative Automatic Instruction Evolution (AAIE) with human-designed instructions, and Guided Explorative Self-Improvement Tuning (GESIT) with RL, learning from both an expert model and its own judgements. 
Experiments conducted on English and Chinese humor datasets seemingly demonstrate that CLoST significantly outperforms existing models in humor discrimination and enhances the model's divergent thinking abilities.\", \"strengths\": \"Novel approach to humor generation in LLMs (Unwt, JsCo, SKMr)\\nUses structured thinking and knowledge graphs (SKMr, Unwt, JsCo)\\nShows improved performance on several humor benchmarks (Unwt, JsCo)\", \"weaknesses\": \"Reviewer (gQyc) found the method complex, with unclear explanations.\\nReviewer (SKMr) said the paper lacked human evaluation of generated humor (but this was addressed)\\n\\nBased on the scores (3, 5, 6, 6), the paper is just below the acceptance bar but one reviewer (gQyc) did not engage the authors' rebuttal, and another reviewer (SKMr) made unreasonable requests to release training data and code for review (rather than with the paper). I am proposing that the lower scores could be bumped by a notch, and the paper would meet the acceptance bar.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided extensive rebuttals with an additional experiment. Three reviewers (SKMr, Unwt, JsCo) engaged the authors, reviewer gQyc did not reply to the rebuttals.\\nReviewer SKMr lamented the fact that code and weights had not been released in time for the review, but this is not grounds for rejection, and the authors make a valid point about checking training data for copyright.\"}", "{\"comment\": \"Dear Reviewer SKMr,\\n\\nI hope this message find you well. We have carefully considered your feedback and have made significant improvements to the manuscript. We truly value your insights, and your expertise has greatly contributed to enhancing the quality of our work. Could you please let us know if the revisions meet your expectations? As the deadline for discussion nears, we kindly ask if you could review our updated paper. We are eager to address any additional questions that may arise. 
Thank you for your invaluable support and consideration.\n\nSincerely, \n\nAuthors\"}", "{\"comment\": \"Dear Reviewers,\n\nSince QwQ is trending in the community and performing surprisingly well on various reasoning benchmarks, we included QwQ in humor-related benchmarks for comparison. The results show that although QwQ demonstrates strong ability on math and code, the paradigm for improving general reasoning ability might not generalize well to humor-related tasks, which involve creative thinking and reasoning with a broad span of thought. In fact, we are the first to propose a systematic paradigm that fits creative thinking tasks well, and we thereby call for attention from the whole community to co-build creative thinking AI applications that were previously recognized as a gift for humans only.\n\nBest Regards,\n\nAuthors\"}", "{\"comment\": \"4. Experiments.\n\n(a) Regarding the experiments on generation, we conduct the Divergent Association Task to show the model's imagination capability in Figure 5 (we modify it in line 411 of revised paper). In this experiment, we provided specific words and asked the model to generate associations and imaginations, resulting in 10 associated words. We then used these 11 words to calculate the DAT score, where semantic distances are computed. **The results show that CLoST gains the best performance on divergent association thinking.**\nFor humor generation, there are some showcases in the Appendix (we modify it in line 912 of revised paper). All showcases show that CLoST could be more human-like and concise, which leaves more room for imagination.\n\n(b) We conduct a human evaluation to validate CLoST's performance in humor generation (we modify it in line 430 of revised paper).\nWe choose the first 200 samples in the validation split of the Ruozhiba dataset\\footnote{https://github.com/Leymore/ruozhiba/tree/main?tab=readme-ov-file} and use the method mentioned above to turn each query into a question-answer pair. 
Then four LLMs generate responses to each question as four options.\nThen, we conduct a user preference study to directly verify the creativity of the LLMs. We present each question and several corresponding replies, and ask users to choose the most creative and humorous responses. We select four advanced LLMs to generate responses for a total of 200 questions, and the four responses from the four distinct LLMs are randomly permuted in the options. We conduct an extensive survey through an online survey platform\\footnote{https://www.wjx.cn/}, ultimately collecting 15 valid questionnaires with 3000 votes. Within these collected questionnaires, we calculate the proportion of times each LLM is selected for each question. Finally, we aggregate the total number of times each LLM is chosen across all validation samples, as shown in Figure 6(c). The ratio of this sum to the overall number of selections among all LLMs signifies the user preference for each LLM. We also calculate the win rate based on the dimension of the problem, as shown in Figure 6(b).\n\n(c) We appreciate your concerns about fairness, but as arguably the best LLM, GPT-4o is inevitably used as a baseline for comparison. \nTo allay your concerns, we supplemented an experiment on the Ruozhiba dataset, which most well-known LLMs have been trained on. We asked GPT-4o to rewrite the Ruozhiba query into a question and answer pair, placing the punchline in the answer section. Then, we asked GPT-4o again to rewrite the ground-truth answer into a non-humorous version. Based on the positive-negative pair data, LLMs were tested, and the results are shown in Table 9. **The results show that CLoST also realizes state-of-the-art performance on the Ruozhiba dataset.** (we modify it in line 1034 of revised paper).\n\n(d) We conduct an ablation study on judgement performance in Tables 3 and 4 (we modify it in lines 379 and 447 of revised paper). 
**In Table 3, rows 5-6 show that the model trained with AAIE realizes a noticeable improvement on English benchmarks.** **Rows 3-4 in Table 4 also show the increase in performance on the Chinese benchmark.** An ablation study on the Divergent Association Thinking test is conducted in Figure 5(b) (we modify it in line 411 of revised paper), which shows that **AAIE enhances the divergent associative thinking ability.**\"}", "{\"comment\": \"Dear Reviewer gQyc,\n\nI hope this message finds you well. We have carefully considered your feedback and have made significant improvements to the manuscript. We truly value your insights, and your expertise has greatly contributed to enhancing the quality of our work. Could you please let us know if the revisions meet your expectations? As the deadline for discussion nears, we kindly ask if you could review our updated paper. We are eager to address any additional questions that may arise.\nThank you for your invaluable support and consideration.\n\nSincerely,\n\nAuthors\"}", "{\"comment\": \"3. 
Comparison on English Benchmarks.\n\n| Model | SemEval2020| Oogiri-GO-en | SemEval 2021 2T1 |SemEval 2021 2T1(hard) |SemEval 2021 3T1 |SemEval 2021 4T1 |\n| :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |\n|GPT-4o | 55.08 | 85.09| 85.09 | 60.77| 43.71 | 34.63 |\n|LLAMA3-8B| 59.93 | 72.05| 43.85 | 54.23 | 39.81 | 29.57 | \n|LLAMA3-70B| 60.73 | 88.51| 93.60 |58.08|39.81| 31.82 | \n|QWEN1.5-7B| 50.46 | 36.65 | 62.04 | 52.02 | 31.54 | 24.89 | \n|QWEN1.5-14B| 50.38 | 53.73| 82.05 | 51.04 | 30.92 | 24.24 | \n|QWEN1.5-32B| 56.39 | 68.01| 68.01 | 52.57 | 35.38 | 28.79 | \n|QWEN2-7B| 50.08 | 62.11 | 56.55 | 50.63 | 32.31 | 23.38 | \n|QWEN2-57B| 48.29 | 48.14 | 83.30 | 52.02 | 37.08 | 28.79 | \n|QWEN2.5-32B| 58.71 | 81.68 | 94.00 | 55.22 | 34.77 | 27.92 |\n|Baichuan2-13B| 51.45 | 50.00 | 51.70 | 52.71 | 35.69 | 24.24 |\n|CLoT-7B |53.50 | 52.49| 52.49 | 51.74 | 34.46 | 23.59| \n| **QwQ** |**56.58**| **59.63**| **80.05** |**53.06** |**33.49** |**24.66** |\n| **CLoST** |**64.57**| **97.20** | **96.58** | **57.45** | **48.06** | **35.90** |\n\nWe collect data from various humor generation games such as Oogiri-GO. In these games, a question receives many responses from humans, and these responses are ranked by human voting. We select responses with different vote counts to construct the choices. A simple case consists of responses whose vote counts differ significantly, while a hard case consists of responses whose vote counts differ marginally. As the number of options increases, the vote difference between options shrinks. Finally, we clarify that we randomly shuffle the options in the training and validation sets. CLoST is also the SOTA method on these English benchmarks.\"}", "{\"comment\": \"Thank you for your reply and the additional experimental results. 
My concerns have been resolved.\"}", "{\"summary\": \"This paper introduces CLoST (Creative Leap of Structured Thought), a novel framework for enhancing humor generation and understanding in large language models (LLMs). The work builds upon the Creative Leap-of-Thought paradigm with two key innovations:\n1. Automatic Associative Instruction Evolution (AAIE): A multi-agent system for evolving complex instruction sets\n2. Guided Exploratory Self-Improvement Tuning (GESIT): A preference optimization approach incorporating causal reasoning\nThe paper demonstrates improved performance on both Chinese and English humor benchmarks, including SemEval tasks and the Oogiri-GO dataset. The framework shows particular strength in handling complex humor discrimination tasks and generating creative responses.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a novel integration of structured reasoning with creative creation.\n2. The paper proposes an innovative multi-agent instruction-evolving system and incorporates causal inference into humor understanding.\n3. The proposed two-stage approach shows clear performance improvement through comprehensive empirical evaluation across multiple datasets and languages.\", \"weaknesses\": \"1. The experimental validation lacks human evaluation of generated humor quality.\n2. The three-agent system is not well motivated and can be better presented with motivation, a clear example, and details.\n3. Better exploration and analysis of failure cases is needed.\", \"questions\": \"1. The Associative Automatic Instruction Evolution part is a bit hard to understand and can be presented with a short and clear example to go through. It\u2019s confusing to interpret the symbols.\n2. For the AAIE process, what\u2019s the output data format used to train a LoRA model?\n3. For the guided explorative self-improvement tuning, why do we need r+ and r-? 
Do we use them in the fine-tuning?\n4. In Table 1, it\u2019s surprising to see GPT-4o perform worse than Llama3 on SemEval 2020 and Oogiri-GO.\n5. How stable is the three-agent system? Are there cases where it fails to converge?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1. Comparison on Chinese benchmarks.\n\n| Model |easy|hard|\n| :-----------: | :-----------: | :-----------: |\n|GPT-4o | 64.98 | 63.49 |\n|LLAMA3-8B | 50.72 | 57.44| \n|LLAMA3-70B| 59.48 | 61.22|\n|QWEN1.5-7B|54.82 | 51.71 |\n|QWEN1.5-14B|53.45 | 57.41|\n|QWEN1.5-32B|52.71 | 56.27 |\n|QWEN2-7B| 51.99 | 58.17 |\n|QWEN2-57B| 65.91 | 57.03 |\n|QWEN2.5-32B| 61.53 | 60.46| \n|Baichuan2-13B| 50.56 | 53.61| \n|CLoT-7B |52.12 | 34.46 |\n|**QwQ-32B** | **59.75** | **57.04**| \n|**CLoST-32B**|**90.95** | **69.97**|\n\nIn the benchmark, a question receives many responses from humans, and these responses are ranked by human voting. We select responses with different vote counts to construct the choices. A simple case consists of responses whose vote counts differ significantly, while a hard case consists of responses whose vote counts differ marginally. \nThe results show that CLoST still gains state-of-the-art performance.\"}", "{\"summary\": \"This paper introduces the Creative Leap of Structured Thought (CLoST) framework, which enhances large language models' ability to generate and recognize humor through structured thinking and self-improvement. CLoST introduces a two-stage approach: first, using Associative Automatic Instruction Evolution (AAIE) to diversify and refine humor judgment through complex instructions, and second, employing Guided Explorative Self-Improvement Tuning (GESIT) to strengthen logical reasoning and humor understanding via reinforcement learning. 
CLoST improves humor judgment and creativity across multiple language benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper presents the CLoST framework, a novel methodology for generating humor in LLMs. This structured approach integrates knowledge graphs and causal inference, which helps reinforce logical connections between seemingly unrelated concepts, thus facilitating more coherent humor generation. Notably, the research delves deeply into data augmentation techniques, enhancing the model\\u2019s ability to generate diverse and contextually relevant humorous responses.\", \"The authors conduct extensive testing of CLoST across multiple humor benchmarks. These experiments demonstrate that CLoST consistently outperforms existing humor generation models in terms of accuracy and robustness. The model\\u2019s performance improvements are particularly pronounced in both English and Chinese language settings.\"], \"weaknesses\": [\"The inherent subjectivity of humor presents a major challenge. The current approach seems to emphasize pattern recognition, where the model identifies humor based on learned patterns rather than genuinely understanding or reasoning through the humor's intricacies. The empirical results, especially on hard humor tasks, underscore this shortcoming.\", \"The Associative Automatic Instruction Evolution (AAIE) method employs a multi-agent system\\u2014comprising a rewriter, imaginator, and analyst\\u2014to iteratively evolve instructions. While this approach is novel and showcases creative engineering, it introduces considerable computational overhead. More critically, the ablation study results suggest that the added complexity yields only marginal improvements in performance. Specifically, CLoST shows gains in four out of seven tasks, and even these are not substantial enough to justify the extensive computational resources required.\"], \"questions\": \"1. 
Is the inclusion of AAIE necessary? The performance improvement does not seem substantial, as shown in Table 3. It would be beneficial to highlight the best-performing models in the table and provide a detailed analysis of possible reasons.\n2. The paper mentions that the dataset includes varying difficulty levels. Is this variation primarily reflected in the choice options, such as 2-out-of-1 or 4-out-of-1 selections? It would be helpful to include a more in-depth analysis of how these different difficulty levels affect model performance.\n3. There are minor issues, such as the use of incorrect quotation marks in LaTeX. Additionally, providing more details on experimental parameters, like the temperature setting used, would enhance the clarity and reproducibility of the experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Because the field of jokes is quite new, we appreciate your feedback so that we can better express our innovation.\n\n1. Question about not breaking the shackles of pattern recognition.\n\nIt is true that pattern recognition is unavoidable for most work related to deep neural networks. As for the process of humor generation, human preference is an unavoidable problem, but beyond human preference, humor generation has its own reasoning logic. The highlight of our method is finding a way to accomplish such a special reasoning process.\n\nHumor reasoning is a multi-hop process, and each hop is based on external knowledge injection and proper rationales. Without them, it is difficult for the model to understand the internal humorous logic, making it prone to pattern recognition. Therefore, AAIE is proposed to inject and augment knowledge into the original training data. This helps LLMs understand the underlying logic and rationale. 
\nThen, in the GESIT stage, the reasoning logic for each online generated response is extracted using GPT-4o. In this process, external knowledge is introduced again to assist the model in logical reasoning and human preference learning.\nExperimental results demonstrate that the combination of these two methods can both enhance the model's judgment ability and improve its generative capability.\n\nFor example, there is a conversation: \u201cWhat do you think of Aeroflot?\u201d \u201cI was on this airline once, and arrived an hour early, and finally I have been divorced three years.\u201d Seemingly, Aeroflot has nothing to do with divorce, but they can be related in this way: \"Aeroflot $\\rightarrow$ Its well-known feature is being fast $\\rightarrow$ It could arrive an hour early $\\rightarrow$ This disrupts the trip $\\rightarrow$ So something to ruin the marriage was found $\\rightarrow$ Finally the responder is divorced.\" From the example, we can conclude that there is a sparse knowledge graph between question and answer. The key to making a creative leap is to mitigate the information insufficiency issue. To realize it, the injection of knowledge beyond the literal is necessary for reasoning, which is the goal of CLoST.\n\n2. AAIE's necessity, and ablation experiments on AAIE and analysis.\n\nWe supplement the ablation study in Tables 3 and 4 (we modify it in lines 379 and 447 of revised paper). Rows 5-6 show that the model trained with AAIE realizes a noticeable improvement. Row 1 in Table 3 is the performance of QWEN-1.5-32B. Rows 2-4 show the performance of gradually adding methods on the Oogiri-GO-en dataset. The results show that as tasks are added, especially in the teacher-student system, the judgment performance improves. AAIE with Oogiri-GO-en alone decreases performance slightly. It may be caused by overfitting to the divergence of thought on Oogiri-GO-en. 
\nIn addition, the DAT test is conducted in the ablation study in Figure 5(b), which shows that AAIE enhances the divergent associative thinking ability.\n\n\n3. Experimental setup for option selection and parameters.\n\nThanks for your suggestion. With respect to the multiple-choice questions, there are varying difficulty levels in the dataset, and this variety is primarily reflected in the choice options.\nWe collect data from various humor generation games such as Oogiri-GO. In these games, a question receives many responses from humans, and these responses are ranked by human voting. We select responses with different vote counts to construct the choices. A simple case consists of responses whose vote counts differ significantly, while a hard case consists of responses whose vote counts differ marginally. As the number of options increases, the vote difference between options shrinks. Finally, we clarify that we randomly shuffle the options in the training and validation sets.\n\nTraining pipeline details are supplemented in the Appendix (we modify it in line 783 of revised paper), and experiment details used in generation are listed in Table 9 (we modify it in line 791 of revised paper).\nMore parameters used in generation are listed in Table 9.\n\n4. Quotation error.\n\nWe are sorry for the mistakes and have modified them in the revised version.\"}", "{\"comment\": \"Because the field of jokes is quite new, we appreciate your feedback so that we can better express our innovation.\n\n1. AAIE's details: motivation, examples, output, stability.\n\n**Humor reasoning is a multi-hop process, and each hop is based on external knowledge injection and proper rationales. Without them, it is difficult for the model to understand the internal humorous logic, making it prone to pattern recognition.** Therefore, **AAIE is proposed to inject and augment knowledge into the original training data. 
This helps LLMs understand the underlying logic and rationale.**\nThen, **in the GESIT stage, the reasoning logic for each online generated response is extracted** using GPT-4o. In this process, **external knowledge is introduced again to assist the model in logical reasoning and human preference learning.**\nExperimental results demonstrate that the combination of these two methods can both enhance the model's judgment ability and improve its generative capability.\n\nFor example, there is a conversation: \u201cWhat do you think of Aeroflot?\u201d \u201cI was on this airline once, and arrived an hour early, and finally I have been divorced three years.\u201d Seemingly, Aeroflot has nothing to do with divorce, but they can be related in this way: \"Aeroflot $\\rightarrow$ Its well-known feature is being fast $\\rightarrow$ It could arrive an hour early $\\rightarrow$ This disrupts the trip $\\rightarrow$ So something to ruin the marriage was found $\\rightarrow$ Finally the responder is divorced.\" From the example, we can conclude that there is a sparse knowledge graph between question and answer. **The key to making a creative leap is to mitigate the information insufficiency issue. To realize it, the injection of knowledge beyond the literal is necessary for reasoning, which is the goal of CLoST.**\", \"the_difficulty_in_aaie_process_can_be_concluded_in_two_points\": \"1. How to deepen and broaden understanding step by step. 2. How to end the thought process. For example, in Tables 10-15, a simple prompt can hardly stimulate the model's understanding, as in row 1. AAIE utilizes LLMs' world knowledge to gradually make the generated prompt contain more information, as shown in Table 12 rows 1 to 3, simulating the process of human gradual in-depth thinking. 
So the answers from the two prompts are different, as in Table 13.\nAdditionally, the imaginator tries to reconstruct the conversation based on the informative prompt's answer, as shown in Table 10 rows 1-2. This shifts the story line, so it guides the entire system to explore the boundaries of the conversation, which makes the understanding broader. Finally, if the punchline or core idea of the imagined conversations completely deviates from the original, the system will stop injecting new knowledge, or the maximum step limit is reached.\n\n**The output of AAIE consists of multi-turn question-answer pairs** that include the original conversation, the imagined conversation, and question-answer data from each evolution step.\nFurther, some concepts were originally not in the context, but through this method, they have been added to the context. Thus it is easier for the LLM to understand a joke's rationale.\n\n2. For the guided explorative self-improvement tuning, why do we need r+ and r-? Do we use them in the fine-tuning?\n\n**The $r^+$ and $r^-$ for online DPO training help enhance the humorous reasoning path and weaken the unhumorous reasoning path.** This helps simulate the human thinking process. Therefore, $r^+$ and $r^-$ are necessary to realize the reasoning process. We give an example of $r$ in Figure 12 in the Appendix (we modify it in line 886 of revised paper).\n\nIn addition, we do not use them in the first stage, i.e., the fine-tuning, because it is more efficient to train the model's preference in the DPO training stage.\n\n\n3. Failure showcase and analysis.\n\nThere are some comparative data: Column 2 is the response from CLoST and Column 3 is a better answer by human rating (we modify it in line 1016 of revised paper).\n \nThe creativity is uneven, and the creativity shown in the samples of the training dataset varies widely. 
\nFor example, in case 1, the response from CLoST is an internet meme, while the better response uses a Chinese proverb for comparison.\nIn terms of creativity, the latter has broader reach. In case 2, a shorter and antithetical answer is wittier.\nThere are no outright failure examples; it is only that, under some people's preferences, a response is not humorous.\"}", "{\"title\": \"Review Update\", \"comment\": \"Dear authors, I have updated my score to reflect your response to my concerns. However, I cannot go above 5 since the code was still not disclosed and the dataset will remain private, a matter that is of utmost importance for open and reproducible research.\"}", "{\"comment\": \"Dear Area Chair gsN1,\n\nTo facilitate the reviewers' experience and verification of the creativity of the model, our weights and inference code are currently ready and prepared to be made public. However, due to the double-blind policy, we prepared an anonymous GitHub repository but encountered the file size limit when committing model weights to GitHub. Since there is no anonymous mode on huggingface, how would you recommend we share the materials while following the double-blind policy?\n\nSincerely, \n\nAuthors\"}", "{\"summary\": \"The paper introduces a framework called Creative Leap of Structured Thought (CLoST) to reinforce humor generation in LLMs. Building upon the limitations of the CLoT paradigm, the authors propose a systematic method inspired by KGs and causal relationships. The framework consists of two stages: Associative Automatic Instruction Evolution (AAIE) and Guided Explorative Self-Improvement Tuning (GESIT). In the first, the model is trained with human-designed instructions to improve its humor-judgement capabilities. In GESIT, the model refines its ability to generate humorous responses through RL, learning from both an expert model and its own judgements. 
Experiments conducted on English and Chinese humor datasets seemingly demonstrate that CLoST outperforms existing models in humor discrimination and enhances the model's divergent thinking abilities.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper addresses humor generation, a task that is famously challenging, introducing a framework that uses causal relationships to model associations between concepts. Furthermore, a two-stage training process is a good approach, separating the judgement part from the generation part in LLMs.\", \"weaknesses\": \"Firstly, the paper lacks clarity in several sections, making it difficult to fully comprehend the proposed methods. For instance, the mathematical formulation in S2.1 seems disconnected from the rest of the paper, and is not well-explained. Secondly, there is insufficient detail about the datasets used, especially the \"in-house data\". The lack of information about data sources and availability (and code!) raises concerns about the reproducibility of the experiments. Then, the evaluation metrics and experimental setup are not discussed enough. It is unclear in my mind how the multiple-choice questions are constructed and whether the comparisons with baseline models are fair (e.g., are other models trained on the same datasets? If not, then it's a problem).\nI will state here, since it's a problem of many papers in this field: the reliance on GPT-4o as an expert model is problematic, as it is a proprietary system. This raises issues regarding the accessibility and reproducibility of your method. Proprietary systems' backends change with time and without alerting the users: they are not fit for experiments that need to keep reproducibility at the forefront.\nFurthermore, let humans evaluate the generated humor. 
A good sample of annotators evaluating whether the produced content is \"funny\" or not would be a better fit and a real contribution to the field.\nFinally, the paper does not address ethical considerations related to HG, such as avoiding offensive or culturally insensitive content.\", \"questions\": [\"Can you provide more information about the \"in-house data\" used for training and testing? What are its sources, and is it possible to make this data publicly available to ensure reproducibility?\", \"How are the multiple-choice questions in the evaluation constructed? Are they standardized across all models, and how do you control for randomness in the selection of options?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer gQyc,\n\nWe deeply appreciate the time and effort you have invested in reviewing our paper. We have thoroughly addressed your valuable comments and made the necessary revisions. Could you kindly re-evaluate our manuscript at your earliest convenience? We are more than willing to discuss any remaining concerns you might have.\n\nThank you for your understanding and cooperation.\n\nSincerely,\n\nAuthors\"}", "{\"comment\": \"4. Experiments.\n\nWe appreciate your concerns about fairness, but as arguably the best LLM, GPT-4o is inevitably used as a baseline for comparison. \nTo allay your concerns, we supplemented an experiment on the Ruozhiba dataset, which most well-known LLMs have been trained on. We asked GPT-4o to rewrite the Ruozhiba query into a question and answer pair, placing the punchline in the answer section. Then, we asked GPT-4o again to rewrite the ground-truth answer into a non-humorous version. Based on the positive-negative pair data, LLMs were tested, and the results are shown in Table 9 (we modify it in line 1034 of revised paper). 
**The results show that CLoST also realizes state-of-the-art performance on the Ruozhiba dataset.**\n\nWe conduct a human evaluation to validate CLoST's performance in humor generation (\\textcolor{blue}{line 430}). \nWe choose the first 200 samples in the validation split of the Ruozhiba dataset\\footnote{https://github.com/Leymore/ruozhiba/tree/main?tab=readme-ov-file} and use the method mentioned above to turn each query into a question-answer pair. Then four LLMs generate responses to each question as four options.\nThen, we conduct a user preference study to directly verify the creativity of the LLMs. We present each question and several corresponding replies, and ask users to choose the most creative and humorous responses. We select four advanced LLMs to generate responses for a total of 200 questions, and the four responses from the four distinct LLMs are randomly permuted in the options. We conduct an extensive survey through an online survey platform\\footnote{https://www.wjx.cn/}, ultimately collecting 15 valid questionnaires with 3000 votes. Within these collected questionnaires, we calculate the proportion of times each LLM is selected for each question. Finally, we aggregate the total number of times each LLM is chosen across all validation samples, as shown in Figure 6(c). The ratio of this sum to the overall number of selections among all LLMs signifies the user preference for each LLM. We also calculate the win rate based on the dimension of the problem, as shown in Figure 6(b).\n\nTraining pipeline (we modify it in line 783 of revised paper):\nCLoST takes a two-stage training strategy. In the first process (supervised fine-tuning (SFT)), we randomly initialize a LoRA model. We train the model with single-turn question-answer format data (data from Figure 2(a)(b)(c)) and multi-turn question-answer format data (data from Figure 2(d) and AAIE). 
In the second process (Direct Preference Optimization (DPO)), the first-stage model serves as the reference model and is frozen as the judgement model. The tunable model is trained to improve the reasoning generation capability. At the beginning of stage 2, only preference question-answer data without rationales is fed into the tunable model for training. After several steps, the rationale for each online generated response is extracted using GPT-4o, and the preference question-answer data with rationales are mixed into the original dataset. In each batch, the ratio of 'w' and 'w/o' rationale is $1 : 1$. \n\nExperiment details (we modify it in line 792 of revised paper):\nOur model is fine-tuned based on QWEN1.5-32B-Chat with the LoRA fine-tuning method on 8 A100 GPUs. For the first stage, we train the model on $95\\%$ of the dataset mentioned above for 6 epochs with the AdamW optimizer and a learning rate of $3e-4$. In the second stage, $5\\%$ of the dataset is used to train GESIT for 3 epochs with the AdamW optimizer and a learning rate of $2e-4$. The models are tested on the tasks introduced in the previous part, and the parameters used in generation are listed in Table 5.\n\nThis article addresses the problem that the answers from most LLMs are not humorous or creative enough. In the future, in order to make it more widely applicable, we will consider the humor generation ethics issues, such as offensive or culturally insensitive content, that you raised.\"}", "{\"title\": \"Public discussion phase ending soon\", \"comment\": \"Dear reviewers,\n\nThank you for your diligent work on the reviews. Currently the paper has very split scores: 3, 3, 6, 6, and the authors have responded to every single one of the reviews.\", \"all_reviewers\": \"Did the authors' rebuttals and other reviews affect your score? Please respond before 26 November to let the authors have time to respond if you need any clarification. 
Thank you!\\n\\nYour AC\"}", "{\"comment\": \"4. Human evaluation for generated humor quality.\\n\\nThanks a lot for your suggestion. We choose the first 200 samples in the validation split of the Ruozhiba dataset\\\\footnote{https://github.com/Leymore/ruozhiba/tree/main?tab=readme-ov-file}. We call GPT-4o to rewrite each Ruozhiba query into a question-answer pair, placing the punchline in the answer section. Then four LLMs generate responses to each question as four options.\\nThen we conduct a user preference study to directly verify the creativity of the LLMs. We present a question and several corresponding replies, and ask users to choose the most creative and humorous responses as shown in . Here we select four advanced LLMs to generate responses for a total of 200 questions, and the four responses from the four distinct LLMs are randomly permuted in the options.\\nWe conduct an extensive survey through the online survey platform\\\\footnote{https://www.wjx.cn/}, ultimately collecting 15 valid questionnaires with 3000 votes. Within these collected questionnaires, we can calculate the proportion of times each LLM is selected for each question. Finally, we aggregate the total number of times each LLM is chosen across all validation samples, as shown in Figure 6(c). The ratio of this sum to the overall number of selections among all LLMs signifies the user preference for each LLM. We also calculate the win rate based on the dimension of the problem, as shown in Figure 6(b). (We modify it in line 430 of the revised paper.)\"}", "{\"summary\": \"This paper introduces the Creative Leap of Structured Thought (CLoST) framework, aiming to improve the humor understanding capabilities of LLMs. It consists of two stages: Associative Automatic Instruction Evolution (AAIE) and Guided Exploratory Self-Improvement Tuning (GESIT). In AAIE, human-designed instructions and automated instruction evolution help the model develop humor understanding capabilities. 
In GESIT, the model\\u2019s humor generation is improved using bootstrap Direct Preference Optimization (DPO) training. By learning from expert-provided joke rationales and AAIE-trained judgment skills, the model iteratively refines its humorous responses through guided reasoning.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This paper introduces the Creative Leap of Structured Thought (CLoST) framework, aiming to improve the humor understanding capabilities of LLMs. It consists of two stages: Associative Automatic Instruction Evolution (AAIE) and Guided Exploratory Self-Improvement Tuning (GESIT).\", \"weaknesses\": \"1. This paper is very difficult to read and understand. It comes with a seemingly fancy title and introductory paragraphs not closely related to the goal.\\n\\nIt claims \\\"humor generative abilities\\\" in the intro and the teaser figure; half of the method is for improving humor judgement ability and the second half is developed for improving generation abilities. But the test tasks are multiple-choice questions from SemEval (humor classification/discrimination) and have nothing to do with generation. \\n\\nThe motivation and method contain lots of unclear expressions. CLoT, the method that it is based on, is not explained clearly. As a reader I am so confused with this paper that I have to rely on AI to help me understand this paper better. Even the figure is confusing. For example in Figure 1, how come a response by CLoST \\\"Job? No No?\\\" is more humorous and interesting than more informative ones such as \\\"Of course not! If you are satisfied with your job, you're already one step ahead of the person who invented the snooze button\\\" by GPT-4o?\\n\\n2. The authors seem to lack sufficient background knowledge of basic NLP & humor concepts and existing works. Multiple-choice questions are not a faithful title/motivation for \\\"humor generation abilities\\\".\\n\\n3. 
Experiments: The compared baselines are zero-shot LLMs such as GPT, QWEN, etc. The only prior work being compared is CLoT, which is not discussed in detail and lacks any ablation of the introduced components.\", \"questions\": \"/\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"2. Comparison on the Ruozhiba dataset.\\n\\n| **CLoST** | GPT-4o | QWEN1.5-32B | CLoT | **QwQ** |\\n| :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | \\n|**95.35**| 76.40 | 68.85 | 50.20 | **90.80** |\\n\\nWe supplemented an experiment on the Ruozhiba dataset, which most well-known LLMs have been trained on. We asked GPT-4o to rewrite each Ruozhiba query into a question-answer pair, placing the punchline in the answer section. Then, we asked GPT-4o again to rewrite the ground-truth answer into a non-humorous version. Based on the positive-negative pair data, LLMs were tested, and the results are shown in Table 9 (we modify it in line 1034 of the revised paper). The results show that CLoST also realizes state-of-the-art performance on the Ruozhiba dataset.\"}", "{\"comment\": \"Dear Reviewer SKMr,\\n\\nTo make it easier for the reviewers to verify the validity of the method, we are inquiring of the AC whether there are any means to anonymously share our model weights and inference evaluation code with the reviewers.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer JsCo,\\n\\nWe sincerely appreciate your suggestions so that we can make great improvements in the revised version. And we have the same expectation as you that this innovative work will serve as a milestone for future work and unleash the potential of LLMs in creative thinking. \\n\\nSincerely,\\n\\nAuthors\"}", "{\"comment\": \"Because the field of jokes is quite new, we appreciate your feedback so that we can better express our innovation.\\n\\n1. The motivation is not clear. 
\\n\\n**Humor reasoning is a multi-hop process, and each hop is based on external knowledge injection and proper rationales. Without them, it is difficult for the model to understand the internal humorous logic, making it prone to pattern recognition.** Therefore, **AAIE is proposed to inject and augment knowledge into the original training data, which helps LLMs understand the underlying logic and rationale**.\\nThen, **in the GESIT stage, the reasoning logic for each online generated response is extracted** using GPT-4o. In this process, **external knowledge is introduced again to assist the model in logical reasoning and human preference learning**.\\nExperimental results demonstrate that the combination of these two methods can both enhance the model's judgment ability and improve its generative capability.\\n\\nFor example, consider the conversation: \\u201cWhat do you think of Aeroflot?\\u201d \\u201cI was on this airline once, and arrived an hour early, and finally I have been divorced for three years.\\u201d Seemingly, Aeroflot has nothing to do with divorce. But they could be related in this way: \\\"Aeroflot $\\\\rightarrow$ Well-known feature is fast $\\\\rightarrow$ It could arrive an hour early $\\\\rightarrow$ It leads to a disrupted trip $\\\\rightarrow$ So something to ruin the marriage was found $\\\\rightarrow$ Finally the responder is divorced.\\\" From the example, we can conclude that there is a sparse knowledge graph between question and answer. **The key to making a creative leap is to mitigate the information insufficiency issue. To realize it, the injection of knowledge beyond the literal is necessary for reasoning, which is the goal of CLoST.**\\n\\n2. 
Judgement model's importance.\\n\\n**Judgement ability is a fundamental skill of LLMs that further empowers their reasoning ability.** Some studies have verified that the reward model is important for reasoning methods such as Monte Carlo tree search or reinforcement strategy learning.\\n**The reward model helps to optimize the behavior of large language models by providing feedback so that they produce outputs that are more in line with expectations.** Due to the subjectivity of humor, a scalar score may include great noise. **Choosing the best from multiple choices is then fundamental to establishing an evaluation metric.**\\n\\n3. CLoT is not explained clearly, and comparison of humor generation performance.\\n\\nCLoT develops two basic abilities to facilitate humor understanding and generation: selection skill and ranking skill. The innovation of CLoT is the introduction of nouns as instruction receipt, thereby enhancing generalization.\\nHowever, as mentioned in CLoT, directly fine-tuning on the given creative data merely amounts to a rigorous fitting of the data. This fitting process only captures the inherent creative patterns within the data, failing to stimulate \\u201dthinking outside the box\\u201d for generating novel ideas. Furthermore, creative data is inherently scarce, and relying solely on dataset fitting easily leads to being trapped in local patterns. (We modify it in line 49 of the revised paper.)\\n\\nRegarding humor generation, humor is subjective, and judgements from diversified groups may vary.\\nIn terms of expression differences, CLoST could be more human-like and shorter, which leaves more room for imagination. From a statistical perspective in Figure 6(b), **human evaluation shows that CLoST gains more preference.**\"}", "{\"comment\": \"Because the field of jokes is quite new, we appreciate your feedback so that we can better express our innovation.\\n\\n1. Not well explained (like Problem definition). 
\\n\\n**Humor reasoning is a multi-hop process, and each hop is based on external knowledge injection and proper rationales. Without them, it is difficult for the model to understand the internal humorous logic, making it prone to pattern recognition.** Therefore, **AAIE is proposed to inject and augment knowledge into the original training data, which helps LLMs understand the underlying logic and rationale.**\\nThen, **in the GESIT stage, the reasoning logic for each online generated response is extracted** using GPT-4o. In this process, **external knowledge is introduced again to assist the model in logical reasoning and human preference learning.**\\nExperimental results demonstrate that the combination of these two methods can both enhance the model's judgment ability and improve its generative capability.\\n\\nFor example, consider the conversation: \\u201cWhat do you think of Aeroflot?\\u201d \\u201cI was on this airline once, and arrived an hour early, and finally I have been divorced for three years.\\u201d Seemingly, Aeroflot has nothing to do with divorce. But they could be related in this way: \\\"Aeroflot $\\\\rightarrow$ Well-known feature is fast $\\\\rightarrow$ It could arrive an hour early $\\\\rightarrow$ It leads to a disrupted trip $\\\\rightarrow$ So something to ruin the marriage was found $\\\\rightarrow$ Finally the responder is divorced.\\\" From the example, we can conclude that there is a sparse knowledge graph between question and answer. **The key to making a creative leap is to mitigate the information insufficiency issue. To realize it, the injection of knowledge beyond the literal is necessary for reasoning, which is the goal of CLoST.**\\n\\n2. Dataset details.\\n\\nThe in-house data is private and is not suitable for disclosure. But we have collected the Ruozhiba dataset and generated responses with CLoST. They are manually reviewed and screened to be released as a public research tool. 
We will then publish the full code after the paper is accepted.\\n\\n3. The evaluation metrics and experimental setup.\\n\\n With respect to the multiple-choice questions, there are varying difficulty levels in the dataset, and this variation is primarily reflected in the choice options.\\nWe collect data from varied humor generation games such as Origi-GO. In these games, a question gains many responses from humans, and these responses are ranked by human voting. We select responses with different vote counts to construct the choices. Easy cases are formed from responses with a large vote disparity, and hard cases from responses with small vote differences. As the number of options grows, the gap between options shrinks. Finally, we clarify that we randomly shuffle the options in the training and validation sets.\"}", "{\"comment\": \"Dear Reviewer JsCo,\\n\\nI hope this message finds you well. We have carefully considered your feedback and have made significant improvements to the manuscript. We truly value your insights, and your expertise has greatly contributed to enhancing the quality of our work. Could you please let us know if the revisions meet your expectations? As the deadline for discussion nears, we kindly ask if you could review our updated paper. We are eager to address any additional questions that may arise.\\nThank you for your invaluable support and consideration.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer SKMr,\\n\\nThanks for your positive feedback and attention to creative thinking. \\n\\nDue to compliance reasons within our company, the data is currently undergoing meticulous review to identify and remove a small amount of content with copyright issues. We are hopeful and confident that this process will be completed before our paper is published. Humor generation is a trending topic that is attracting increasing attention. **We are committed to releasing both data and code** to ensure **our work is reproducible**. 
We sincerely appreciate your suggestions so that we can make great improvements in the revised version. And we have the same expectation as you that this innovative work will serve as a milestone for future work and unleash the potential for LLMs in creative thinking. \\n\\nSincerely,\\n\\nAuthors\"}" ] }
CGfWyU28Pd
Why Fine-Tuning Struggles with Forgetting in Machine Unlearning? Theoretical Insights and a Remedial Approach
[ "Meng Ding", "Jinhui Xu", "Kaiyi Ji" ]
Machine Unlearning has emerged as a significant area of research, focusing on 'removing' specific subsets of data from a trained model. Fine-tuning (FT) methods have become one of the fundamental approaches for approximating unlearning, as they effectively retain model performance. However, it is consistently observed that naive FT methods struggle to forget the targeted data. In this paper, we present the first theoretical analysis of FT methods for machine unlearning within a linear regression framework, providing a deeper exploration of this phenomenon. We investigate two scenarios with distinct features and overlapping features. Our findings reveal that FT models can achieve zero remaining loss yet fail to forget the forgetting data, unlike golden models (trained from scratch without the forgetting data). This analysis reveals that naive FT methods struggle with forgetting because the pretrained model retains information about the forgetting data, and the fine-tuning process has no impact on this retained information. To address this issue, we first propose a theoretical approach to mitigate the retention of forgetting data in the pretrained model. Our analysis shows that removing the forgetting data's influence allows FT models to match the performance of the golden model. Building on this insight, we introduce a discriminative regularization term to practically reduce the unlearning loss gap between the fine-tuned model and the golden model. Our experiments on both synthetic and real-world datasets validate these theoretical insights and demonstrate the effectiveness of the proposed regularization method.
[ "Machine Unlearning", "Fine-Tuning", "Learning Theory" ]
Reject
https://openreview.net/pdf?id=CGfWyU28Pd
https://openreview.net/forum?id=CGfWyU28Pd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yh0z3AwrWb", "yFnKg1H5ul", "scJc0zMMw8", "qRiBRFpsda", "mVF4td8m0y", "lZxr2xigfd", "l7a8uS2UYH", "kkDxIwipSu", "kA1rg84kYN", "jvqv3cpQFS", "goHaqZq17h", "gdGu8ifRyL", "ci7HT9mYcx", "bbmjQbiZBP", "Z2f88hQMB0", "XL5hcJTzRZ", "VExw0lYAFq", "O8sOqJvQ0N", "LQfA8YGDvS", "Ha61wI2Dx2", "GNwA0kIWVN", "G0gS4t4s8Y", "DPBCON0Ome", "APyNP5qfOt", "6dDgKvuZpE", "5rmiiFVOz9", "4fjbjNA91S", "4LYGaCGk2i", "43NssQfy6p", "3lsfjC0kmF" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732067232120, 1732065809985, 1732640140759, 1732456272829, 1730687412518, 1732801326692, 1732064247430, 1732456257468, 1730556630071, 1730554081302, 1732456229243, 1734922203020, 1732067489386, 1730680841067, 1732657311116, 1732844303763, 1732640643647, 1737523681124, 1733196653192, 1732066357733, 1732066932323, 1732845717043, 1733196699162, 1732656687267, 1732844113793, 1732064901979, 1732801162230, 1732657609496, 1732456244150, 1732640576577 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Reviewer_bHDY" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Reviewer_bHDY" ], [ "ICLR.cc/2025/Conference/Submission5062/Reviewer_h2Yv" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5062/Reviewer_NAAU" ], [ "ICLR.cc/2025/Conference/Submission5062/Reviewer_J9SB" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Area_Chair_WLCs" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Reviewer_h2Yv" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Reviewer_h2Yv" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Reviewer_NAAU" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Reviewer_h2Yv" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Authors" ], [ "ICLR.cc/2025/Conference/Submission5062/Reviewer_h2Yv" ] ], "structured_content_str": [ "{\"title\": \"Reponse\", \"comment\": \"We sincerely thank the Reviewer J9SB for the valuable time and positive comments on our work! We hope our following answers could address your concerns.\\n\\n**Response to Weakness 1:**\\nThanks for the comment! Overparameterized linear models are widely adopted as a foundational framework for studying learning problems (e.g., transfer learning [1], continual learning [2], In-context learning [3,4]) and can be extended to more general settings such as neural tangent kernel (NTK) analysis. 
While the setting of overparameterized linear regression is indeed simplistic, it provides a valuable starting point for analyzing training dynamics, capturing the trajectory of learning rather than just upper or lower bounds. \\nIn this work, we start with the overparameterized linear regression model and hope to extend the analysis to more general cases such as multi-layer neural network in the future.\\n\\n**Response to Weakness 2:**\\nThank you for your comment! Our primary motivation stems from the empirical observation that while fine-tuning can maintain model utility on the remaining data, it often struggles to effectively forget targeted data. To provide theoretical insights into this phenomenon, we begin with a simpler case where the dataset is completely separable.\\n\\nFor example, consider a dataset containing two categories: bananas and cars. Bananas have a distinct feature like \\\"elongated shape,\\\" and cars have a unique feature like \\\"mirrors.\\\" These features are entirely distinct. We then extend this setup to include overlapping features, such as color, where both bananas and cars might share a feature like \\\"yellow.\\\"\\n\\nWe recognize that most practical cases involve overlapping features. However, presenting only the overlapping case would relegate the distinct case to a special case, disrupting the presentation flow. Our goal is to demonstrate that even in the extreme case of distinct features, fine-tuning still fails to unlearn. Therefore, starting with a simpler, distinct feature setup allows us to build a foundation to explain why fine-tuning fails to unlearn. From this base, we advance the analysis to handle more complex, overlapping cases.\\n\\n**Response to Weakness 3:**\\nYes, the scenario you mention aligns with our findings in Theorem 4.1, which demonstrates that discarding overlapping features leads to an increase in the retaining loss. 
To address this, in the next section, we propose using both retain loss and unlearn loss as objective functions, allowing the model to effectively balance retaining overlapping features while unlearning targeted information.\\n\\n**Response to Question 1:**\\nAs mentioned in our previous response, we discuss the case of nonoverlapping features as a starting point to build intuition about the phenomenon of fine-tuning failing to unlearn. While this case may be less common in standard ML, it provides a simplified setup that allows us to derive theoretical insights and establish a foundation for analyzing more complex overlapping scenarios.\\n\\nOur current work focuses on overparameterized linear regression as a framework for machine unlearning. Extending this analysis to strongly convex settings or a two-layer linear network would indeed require different techniques, as closed-form solutions are not readily available. We agree that such extensions are valuable and will consider them as part of future work. Thank you for the suggestion!\", \"references\": \"[1] Wu, Jingfeng, et al. \\\"The power and limitation of pretraining-finetuning for linear regression under covariate shift.\\\" Advances in Neural Information Processing Systems 35 (2022): 33041-33053.\\n\\n[2] Ding, Meng, et al. \\\"Understanding Forgetting in Continual Learning with Linear Regression.\\\" arXiv preprint arXiv:2405.17583 (2024).\\n\\n[3] Wu, Jingfeng, et al. \\\"How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?.\\\" arXiv preprint arXiv:2310.08391 (2023).\\n\\n[4] Chen, Xingwu, Lei Zhao, and Difan Zou. \\\"How transformers utilize multi-head attention in in-context learning? a case study on sparse linear regression.\\\" arXiv preprint arXiv:2408.04532 (2024).\"}", "{\"title\": \"Response\", \"comment\": \"We thank the Reviewer h2Yv for the thorough review and the insightful comments. 
We hope our following answers can address your concerns.\\n\\n**Response to Weakness 1:**\\nOur primary motivation arises from the empirical observation that fine-tuning can maintain model utility on remaining data but struggles to effectively forget targeted data. Sections 3 and 4 provide theoretical insights to explain and address this issue. Section 5 redesigns a discriminative regularizer aligned with the principle deduced in Section 4: regularization should prioritize remaining accuracy over unlearning accuracy. In response to this weakness, we have rewritten Section 5 to emphasize the redesign rationale for the regularizer and its implications, rather than merely introducing it. Please check it. Thank you for the comment! \\n\\n**Response to Weakness 2:**\\nWe agree that there is limited distinction between our proposed regularization approach and existing objective functions, primarily differing in the choice of the loss function and the parameter $\\alpha$. However, the main novelty of our paper lies in providing the first theoretical framework to understand the empirical phenomenon of why naive fine-tuning fails to forget. Based on the analysis in Sections 3 and 4, we aim to address this phenomenon by designing suitable objectives.\\nAdditionally, Theorem 4.1 provides the rationale for selecting the parameter $\\alpha \\in(0,1]$ for the unlearning loss, whereas in [1], the constraint $\\alpha \\in(0,1]$ is applied to the retain loss. **While our objective formulation may appear similar to existing work, our novelty/contribution goes beyond the design of the regularizer.** In response to this weakness, we have rewritten Section 5 to modify our previous statement. Please check it. Thank you for the comment! \\n\\n**Response to Weakness 3:**\\nWe agree with the Reviewer that KL-FT and (I)CE-FT require access to the forget set, unlike Vanilla FT. 
We appreciate this observation and welcome the opportunity to discuss this point in the context of current machine unlearning settings. Beyond methods requiring access to the forget set, there is also the zero-shot unlearning setting [2], which achieves unlearning without access to either the forget or remaining datasets. Thus, we view this distinction as representing different settings rather than a weakness of our approach.\\nRegarding a more comprehensive evaluation, as noted in our response to Weakness 2, the primary novelty of our paper is to provide theoretical analysis and insights into the unlearning process via fine-tuning methods. Thank you for the valuable feedback!\\n\\n**Response to Minor comments 1:** \\nThank you for pointing this out! The legend in Figure 1 should be RL/UL instead of RA/UA. We corrected this in the revised version.\\n\\n**Response to Minor comments 2:** \\nThanks for the comment! Our goal is to make the fine-tuned model $\\mathbf{w}_t$ as close as possible to the golden model $\\mathbf{w}_g$. From the equation on line 289, we can observe the difference between these two solutions, which naturally leads to the following discussion.\\n\\n**Response to Minor comments 3:** \\nThanks for pointing this out! Yes, $d_o$ refers to $d_{\\text{lap}}$, and we corrected the notations in the revised paper.\\n\\n**Response to Minor comments 4:** \\nThank you for the comment! The function $\\mathcal{L}_{\\mathrm{KL}}(\\cdot)$ in Equation (6) corresponds to the KL divergence between the output of the forgotten model and the one-hot encoded random labels. Specifically, $\\mathcal{L}_{\\mathrm{KL}}=\\sum_{i=1}^n \\mathrm{KL}(\\mathbf{w}_t^{\\top}\\mathbf{x}_i \\| Y_i^{\\prime})$, where $\\mathrm{KL}\\left(p_i \\| q_i\\right)=\\sum_{j=1}^{|\\text{Class}|} p_i(j) \\log \\left(\\frac{p_i(j)}{q_i(j)}\\right)$. 
Regarding the baseline of random label or KL-based loss functions, our primary focus is on providing theoretical analysis to explain the observed empirical phenomenon. Thus, we focus our experiments on naive fine-tuning and the most closely related methods to validate our conclusions. Thank you for your feedback!\\n\\n**Response to Question 1 and Question 2:** \\nAlthough we use linear regression to analyze the fine-tuning process, our assumptions about the data structure are based on a classification problem, which is why our experiments focus on class-wise forgetting. \\n\\n**Response to Question 3:** \\nIn Table 2, we select the corresponding best $\\alpha \\in (0,1]$ for each method ((I)CE-FT, KL-FT) to report the results.\"}", "{\"summary\": \"This work explores a known issue with na\\u00efve fine-tuning approaches for the machine unlearning problem: they struggle to forget the targeted data. 
To address this problem, the authors begin by constructing a synthetic experiment (an overparameterized linear regression model) and show that, in this case, fine-tuned weights decompose into two components\\u2014one that targets the remaining data and one that can be considered the residual from the data to forget.\\n\\nBased on this decomposition, they compare two approaches that aim to reduce the unwanted component in the final solution. They find both empirically and theoretically that the approach focusing on solving for the remaining data, rather than solely forgetting the target data, performs better. This observation leads them to propose a loss function that prioritizes overall accuracy over the forgetting term. The performance of this loss is empirically evaluated on a real dataset.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The overall paper is well-structured and pleasant to read.\\n\\n2. The theoretical results inspired insights into the practical effect of the real loss, which were validated on a real data example. The theoretical section is strong and insightful. Supporting empirical experiments presented alongside each step to further ground the understanding are sensible, well presented, and convey each point effectively.\", \"weaknesses\": [\"1. I found the presentation of the two terms in the loss (Eq. 6), with one being the main term and the other the \\u201cregularizing term,\\u201d to be problematic. Unless I am mistaken, this distinction is artificially created by placing an arbitrary cap on the regularization scaling term $\\alpha \\in (0,1]$, which effectively upper bounds the contribution of each term to the loss. Therefore, the second point, 2) Regularization Focus, does not represent a real difference, in my opinion, and needlessly obfuscates the work (though I appreciate the desire to differentiate from previous work). 
The crux of the theoretical insight is that the accuracy term should be given more importance than the unlearning term. What is the procedure to tune this $\\alpha$ parameter, as it is a crucial value of the experiment? This detail is missing in the main text and should definitely be included.\", \"From my understanding, ICE-FT is effectively the same as CE-FT with $\\alpha \\in [1,\\infty)$ instead of being restricted to $\\alpha \\in (0,1]$. (If that is not the case, please add an explicit formula for the ICE-FT and disregard the following comment). Related to that point, the optimal $\\alpha$ value for CE-FT should then be as close to 1 as possible. Is that the case? In Figure 4b, why are ICE and CE with $\\alpha=1$ not equal?\", \"2. The experiment on the real datasets could be improved in a few areas. First, why is only one fine-tuning baseline presented? Second, details about the tuning and training procedure are missing, which are particularly important as the authors highlighted the sensitivity to the $\\alpha$ parameter, and there is no single target metric to optimize for. The presentation of the results on the real datasets could also be improved. Looking at Table 2, it is hard to see how each method performs. Showing Figure 3 for all the datasets could help, for example, or using UA vs. RA curves would be more convincing than presenting point predictions in a table. Additionally, the legend in Figure 4b is hidden behind the lines and has the wrong colors.\", \"3. Assumption 3.1 and Remark 1. I found the construction of the matrices $F$ and $R$ to be a little bit vague. Shouldn\\u2019t it have some constraints on $d_f$ and $d_r$, since $w_{f*}$ and $w_{r*}$ are to be exact solutions to both problems? The remark really feels more like a part of the assumption (similar comments apply to Remark 2 paired with Assumption 3.3). 
The concept of feature overlapping should be clarified, as it has a different meaning than how it is usually used.\", \"These points should be clarified to understand the limits of the conclusions we can draw from this synthetic setup.\", \"Minor\", \"The discussion after Theorem 3.2 could be clarified. The constructed example is a scenario where the weights learned from the different tasks are completely orthogonal to each other, so the fine-tuning step is performed in a totally unrelated space. This is a great illustrative example to showcase how fine-tuning cannot affect the performance on the initial task. However, this is very specific to this particular crafted setting with extreme overparameterization. Therefore, it doesn\\u2019t really \\u201csuggest that the fine-tuning model is unable to forget the information it previously acquired from\\u2026\\u201d in general; it applies only to that particular model. Discussing the relations to the setup from Ding et al. (2024) would be interesting.\", \"The discussion following Theorem 3.2 and Theorem 3.4 feels somewhat repetitive, as the same points are made. You could instead focus more on discussing the differences between the two.\", \"There is no reference in the main text to the appendix.\", \"The norm in Eqs. 1, 2, and 3 is undefined.\", \"The point that \\\"we favor the principle that regularization should prioritize remaining accuracy over unlearning accuracy\\\" should be made before presenting the loss. Without it, the loss feels somewhat disconnected from the previous section.\", \"Consider introducing UA and RA earlier (perhaps as part of the problem description), as you present various results before their formal introduction.\", \"Typos and small details/suggestions.\", \"(13), (38) (109), and a few others\\u2026 typo inverted bracket \\u2018removing\\u2019 -> `removing\\u2019.\", \"Table 1: FT (Fine-Tuning) Methods. 
-> FT (Fine-Tuning) Method ?\", \"In Section 2, overparameterized linear regression should be defined when it is introduced (n<<d). (also typo overparamterized)\", \"You could define RL and UL with mathematical notation as they are introduced.\", \"Theorem 3.2 , missing reference to the proof in the appendix after the theorem statement.\", \"Font of Figure 1 are too small.\", \"Introduce the notation $y\\u2019$ as a wrong label to before Eqn. 6.\", \"You can drop the line ``The evaluation metrics include Unlearning Accuracy (UA), MIA-Efficacy, Retaining Accuracy (RA), Test Accuracy (TA), and RunTime'' in Table 2 caption to save space.\"], \"questions\": \"1. For Figure 2a and the accompanying discussion and conclusions, it is important to note the fraction of overlapping features, as it likely has a significant impact on these aspects. Could you comment on this point?\\n\\n2. Line 413: \\u201cNotably, the regularization parameter is typically constrained to the range (0, 1].\\u201d What is notable about that? Could you clarify the point of this comment? I couldn\\u2019t find any reference in (Fan et al., 2023) to bounding this parameter to that range.\\n\\n3. Why is UA\\u2013RA more important than RA\\u2013TA? UA seems to relate to the training set.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Regarding the CIFAR-10 example and the connection between Full Feature Overlap and random unlearning**\\n\\nI am not disagreeing that different classes can have different set of **implicit** features. But the model we are trying to unlearn with do not unfortunately have access to these nicely disentagled features that are directly tied to the output classes. If it were the case, both learning, and unlearning would be easy, no fine-tuning needed. For example, in the case of Section 4, scenario 1, once we reset the weights corresponding to $d_f$, we are done. 
I do not think we even have to do the fine-tuning, since this weight is already the golden model. So even in the class-forgetting scenario, there is at least full overlap in the "explicit features." We can claim that some form of disentangled representation learning is used and that the model being analyzed works on top of that, and that is fine. But then there is the case of unlearning that representation model, which is beyond the scope of what is being studied in this paper. This is a relatively minor point, though, and I appreciate the explanations provided by the authors.

**Regarding the difference (or lack thereof) between KL divergence and CE as utilized in this work**

I would be quite surprised if the "practical implementation differences" between the two equivalent objectives led to statistically significant differences. The exact numbers can differ, but it would be very surprising if those led to marked variations. I would actually be very concerned if the differences were big, since that would imply wrong implementations in these machine learning libraries. However, much has been said in this submission regarding the difference between (I)CE-FT and KL-FT (3 of the 4 methods evaluated), whereas theoretically they are all the same objective up to (i) the choice of the hyperparameters and (ii) the practical implementation differences of CE and KL for the same ground-truth distribution $p$ and predicted distribution $q$.

---

**Response**

We are grateful to Reviewer bHDY for the detailed review and constructive feedback. We hope to address your concerns accordingly.

**Response to Weakness 1:**
- (1) Setting of the regularization: You raised an important point regarding the scale of the two terms in the loss (Eq. 6).
Our design intentionally restricts $\alpha \in (0,1]$ to prioritize retaining accuracy over unlearning accuracy, which differs from the approach in [1], where unlearning accuracy is considered more critical. We understand this distinction may appear to be an arbitrary design choice; however, it aligns with our earlier conclusion: we should upper-bound the unlearning accuracy instead of the remaining accuracy as in [1].
- (2) Experimental setting of $\alpha$: In our experiments, we explore the sensitivity of the regularization parameter within the range $[0, 0.1, \ldots, 0.9]$. Yes, ICE-FT is exactly the same as CE-FT when $\alpha \in [1, \infty)$. However, as shown in Figure 4b, increasing $\alpha$ did not improve the unlearning accuracy, and the retain accuracy remained unchanged.
- (3) Difference between ICE and CE under $\alpha$: The optimal $\alpha$ for CE-FT should ideally be greater than 1; however, the analysis in [1] restricted the range of $\alpha$, making 1 the best value under this limitation. Furthermore, in Figure 4b, ICE and CE converge and are identical when $\alpha = 1$. To highlight the differences between the two methods, Figure 4b only presents the results for $\alpha$ in the range $[0.1, \ldots, 0.9]$.

**Response to Weakness 2:**
- (1) Why is only one fine-tuning baseline presented? Our primary motivation stems from the empirical observation that, while fine-tuning can maintain the utility of the model on the remaining data, it struggles to effectively forget the targeted data. We aim to provide a theoretical analysis that explains this phenomenon. At this stage, we focus on fine-tuning-based experiments to validate our initial analysis. In the future, we plan to extend our work with additional analysis and experiments on other unlearning methods, such as gradient ascent.
- (2) Additional experimental details: We have included details about the tuning and training settings, along with more experimental results, in Appendix A.2. Specifically, we provide information on the unlearning setup, datasets, model architectures, and the loss values for different $\alpha$ values in ICE-FT and CE-FT.
- (3) Improving the experimental presentation: Thank you for your suggestions on improving the experimental presentation! We have addressed this by: (a) extending Figure 3-style visualizations to all datasets; (b) including UA vs. RA curves for CIFAR-10 in Figure 4, with the Retain curve (blue line) and Forget curve (orange line) highlighted; we are working on further extending these visualizations to other datasets; (c) could you clarify your concerns about Table 2's organization and the incorrect legend colors in Figure 4b? This feedback would help us improve these aspects. Thank you!

**Response to W3:** We use the matrices $\mathbf{F}$ and $\mathbf{R}$ to illustrate the structure of the data matrix. Specifically, the entire data matrix can be represented as $\mathbf{X}=[\mathbf{X}_r, \mathbf{X}_f]$, where $\mathbf{X}_r^{\top}=[\mathbf{R}^{\top}, \mathbf{0}]$ and $\mathbf{X}_f^{\top}=[\mathbf{0}, \mathbf{F}^{\top}]$. Consequently, based on the problem setup (lines 141-157), we can deduce the existence of $\mathbf{w}_*^f$ and $\mathbf{w}_*^r$ such that $\mathbf{w}_* = \mathbf{w}_*^r + \mathbf{w}_*^f$, $\mathbf{y}^f=\mathbf{X}_f^{\top} \mathbf{w}_*^f$, and $\mathbf{y}^r=\mathbf{X}_r^{\top} \mathbf{w}_*^r$. The dimensions $d_f$ and $d_r$ should be consistent with the partitioning of the feature space. The term "feature overlapping" in our work refers to scenarios where some features are shared between $\mathbf{X}_r$ and $\mathbf{X}_f$.

---

**Comment:** Thank you once again for your initial review! We have carefully addressed your comments and provided a detailed response. As the discussion period deadline approaches, we would greatly value your feedback on our response.

---

**Official Review**

**Summary:** This paper works on why the current fine-tuning unlearning method cannot perform well in many unlearning tasks. It provides a theoretical analysis within a linear regression framework to show that when fine-tuning retains model performance on the remaining data, it cannot fully remove the influence of the forgetting data. The paper then proposes a discriminative regularization term to close the performance gap between the fine-tuned model and the retrained model. The experimental results validate the effectiveness of this approach in improving unlearning accuracy.

**Soundness:** 1  **Presentation:** 3  **Contribution:** 1

**Strengths:**

1. The topic of this paper is quite interesting. The fine-tuning approach is one of the mainstream approaches to unlearning. However, such methods are usually unstable across different unlearning tasks and datasets, so research on why they can fail is meaningful.
2. This paper provides a theoretical analysis of the linear regression model.
3. The experimental results clearly show the performance improvements compared with the other fine-tuning methods.

**Weaknesses:**

The total contribution of the paper is not enough:

1. Theoretical analysis: Regarding Theorems 3.2 and 3.4, this paper claims that the MSE on the remaining data and the forgetting data stays at 0 during fine-tuning for overparameterized models. However, on real-world datasets, the model cannot fit the training data perfectly, and the two theorems are hard to extend to other, larger models and datasets.
In addition, such analysis is based on a regression model, while the experimental part is mainly based on classification tasks. Whether the theoretical analysis on regression can be extended to classification still needs to be proved.

2. Discriminative regularization: This paper does not explicitly show the loss of Inverse CE-FT. Is it simply moving the hyperparameter $\alpha$ from the second term to the first term? If so, I cannot find a significant difference between Inverse CE-FT and the original CE-FT. In addition, regarding the loss function of KL-FT, many other methods have tried to incorporate a KL loss to align the output logits [1] or inter-layer embeddings [2, 3]. Therefore, the proposed discriminative regularization does not show any improvement compared with previous works.

3. Experimental results: This paper only conducts experiments on single-class unlearning (classes 3, 6, and 9). In addition, it only compares the proposed method with naive fine-tuning and the loss proposed in [4]. This is not sufficient to prove the effectiveness of the proposed methods. The paper could include more SOTA unlearning methods from the recent two years and compare unlearning results in more complex settings such as random-sample unlearning or backdoor-attack unlearning.

4. The technical part of this paper needs to be improved. Some notations need to be further checked, for example, $1-n_f$ in line 194.

[1] Chundawat, Vikram S., et al. "Can bad teaching induce forgetting? Unlearning in deep networks using an incompetent teacher." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37, No. 6. 2023.

[2] Chundawat, Vikram S., et al. "Zero-shot machine unlearning." IEEE Transactions on Information Forensics and Security 18 (2023): 2345-2354.

[3] Shen, Shaofei, et al. "CaMU: Disentangling Causal Effects in Deep Model Unlearning." Proceedings of the 2024 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics, 2024.

[4] Fan, Chongyu, et al. "SalUn: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation." arXiv preprint arXiv:2310.12508 (2023).

**Questions:**

1. What do the distinct and overlapping features mean? Could the authors give some examples to explain them?

2. Considering that this paper mainly conducts class-wise unlearning experiments, how do different methods perform under the evaluation of relearn time [1]?

[1] Chundawat, Vikram S., et al. "Zero-shot machine unlearning." IEEE Transactions on Information Forensics and Security 18 (2023): 2345-2354.

**Flag for ethics review:** No ethics review needed. **Rating:** 3 **Confidence:** 3 **Code of conduct:** Yes

---

**Official Review**

**Summary:** This paper investigates machine unlearning, which aims to protect user privacy by removing specific data from trained models in response to data deletion requests. The authors examine why fine-tuning often fails to fully erase targeted data. They consider overparameterized linear regression in the cases of overlapping and non-overlapping features. They propose a regularization term that diminishes the role of forgetting. Experimental results on both synthetic and real-world datasets validate that this regularization approach significantly enhances unlearning performance.

**Soundness:** 3  **Presentation:** 4  **Contribution:** 3

**Strengths:**

Studying unlearning from the perspective of overparameterized regression is a great concept. This setting (even though questionable in practice) allows theoretical analyses to be performed.

The entire concept of introducing a regularization term to unlearning is very sound and novel.

The experimental results show improvements if such a term is included.

**Weaknesses:**

The setting for the analyses is simplistic.
It would be great to consider more general cases (for example, strongly convex).

I think the vast majority of practical cases involve 100% overlapping features, which puts in question the bulk of the analyses.

The distinct-features section is a special case of the overlapping section and thus should be omitted. I think the distinct-features results are not stronger, so they are a 'strict' special case.

Option B is 'void' if all of the features are overlapping, which captures the majority of the use cases.

**Questions:**

1. Why bother with the case of non-overlapping features? While such cases sometimes occur in FL, they are a much more seldom occurrence in standard ML.
2. Can the analyses be done for the strongly convex case? It seems this would require a completely different approach, since a closed-form expression is not available in that case. What about a 2-layer linear network?

**Flag for ethics review:** No ethics review needed. **Rating:** 6 **Confidence:** 5 **Code of conduct:** Yes

---

**Comment:** Thank you once again for your initial review! We have carefully addressed your comments and provided a detailed response. As the discussion period deadline approaches, we would greatly value your feedback on our response.

---

**Metareview**

**Summary**

This paper studies specifically the fine-tuning problem in machine unlearning. Since many papers have observed that fine-tuning on the retaining data alone does not yield forgetting performance as good as retraining, the paper explores this phenomenon further using a simple linear regression model and simulated, simplified data, from both theoretical and empirical perspectives. The insight identified from these theoretical and empirical studies is that, when both retaining data and forgetting data are considered, more weight should be given to the retaining data. The paper then demonstrates empirically that this insight is effective.

**Strengths**

1. Reviewer(s) pointed out that fine-tuning is efficient and does not require the forgetting data, so it is a direction worth exploring.
2. The paper provides some theoretical studies and insights.
3. Empirical results demonstrate the effectiveness of the insights.

**Weaknesses**

1. The whole analysis and empirical study used to draw the most important insights and conclusions regarding fine-tuning is based on an oversimplified model (linear regression) and an oversimplified dataset (on which it is easy to tell that different classes can be related to different features). It is not convincing that such a result or insight is practical enough.
2. The most important insight drawn is that more weight should be given to the retaining loss instead of the forgetting loss in machine unlearning. Such a conclusion is counterintuitive and contradicts the main motivation of the paper, namely that fine-tuning on the retaining data alone is not effective at forgetting. This insight is claimed to be one of the most important contributions of the paper, but not sufficiently, because this contradiction exists: it may be an important observation (but the paper still needs to show that both losses are necessary and just that one should be given less weight), or it may be a result of the oversimplified study that does not extend to more practical cases. The analysis and study around this insight is not sufficient in the current paper.
3. The starting point of the paper is some empirical evidence: when the retaining error is zero, the forgetting is far below the gold model (the retrained model). I believe the problem with fine-tuning still exists, but an empirical study forcing the retaining error to be zero could be too strict and could potentially lead to overfitting. A more practical demonstration of the fine-tuning issue is recommended.
4. While the conclusion of the paper still recommends using the forgetting data, the pure fine-tuning methods may then not be worth studying in the first place. The paper needs to find a better way to argue for the significance of studying the fine-tuning problem when the forgetting data is ultimately used. If the forgetting data is not used, then the fine-tuning problem is worth studying; but if the forgetting data is used, the story could be different, which has not been sufficiently discussed in the paper either.

**Recommendation**

I recommend rejecting the paper at the current phase, due to the unresolved issues concerning the oversimplified model and problem setting, and the insufficient arguments about the contradiction in the insight provided by the theoretical results (especially given that the insight comes from an oversimplified study). On the other hand, I believe that if the arguments and insights are considered in a more comprehensive way, the direction is worth studying even with the oversimplified theoretical analyses. The theoretical studies themselves are a good way to start from something simple.

**Additional comments on reviewer discussion**

Two reviewers gave the paper a six, while one of them is still concerned that the oversimplified theoretical study may not be sufficient. The other two reviewers gave the paper a three. One of them gave two rounds of very long discussion expressing the remaining concerns, which are mainly Weaknesses 1 and 2. This reviewer is also concerned about the novelty of the paper's methodology; I agree with the authors' argument that their main contribution was not the methodology, and that the provided insights (if sufficiently argued) would give methodological novelty compared to existing methods. Another reviewer who gave a three also expressed remaining concerns about the paper, mainly Weaknesses 3 and 4.
This reviewer is also concerned that the oversimplified study cannot be extended.

While I agree with the authors that the simplified study is a good starting point for a theoretical analysis, which is potentially challenging, I am concerned about the counterintuitive result provided and its contradiction with the motivation of the paper that pure fine-tuning is not good. As said, this contradiction may be an important observation (but the paper still needs to show that both losses are necessary and just that one should be given less weight), or it may be a result of the oversimplified study that does not extend to more practical cases. So I think the paper is not ready in its current phase, while I agree that the direction is worth exploring from a theoretical point of view.

---

**General Response**

We sincerely thank all reviewers and chairs for their valuable time and effort in providing detailed feedback on our work. In particular, we appreciate the insightful reviews and constructive suggestions from Reviewers bHDY, h2Yv, NAAU, and J9SB, as well as the positive feedback from Reviewers bHDY and J9SB.

Moreover, we would like to address some common concerns raised by the reviewers here:

**1. Why choose overparameterized linear models and focus on class-wise unlearning experiments?**

Overparameterized linear models are widely adopted as a foundational framework for studying learning problems (e.g., transfer learning [1], continual learning [2], in-context learning [3, 4]) and can be extended to more general settings such as neural tangent kernel (NTK) analysis. While the setting of overparameterized linear regression is indeed simplistic, it provides a valuable starting point for analyzing training dynamics, capturing the trajectory of learning rather than just upper or lower bounds. In this work, we start with the overparameterized linear regression model and hope to extend the analysis to more general cases, such as multi-layer neural networks, in the future.

As for the focus on regression versus classification: although we use linear regression to analyze the fine-tuning process, our assumptions about the data structure are based on a classification problem, which is why our experiments focus on class-wise forgetting.

**2. The novelty of the discriminative regularizer.**

We agree that there is limited distinction between our proposed regularization approach and existing objective functions, which differ primarily in the choice of the loss function and the parameter $\alpha$.

However, the main novelty of our paper lies in providing the first theoretical framework for understanding the empirical phenomenon of why naive fine-tuning fails to forget. Based on the analysis in Sections 3 and 4, we aim to address this phenomenon by designing suitable objectives. Additionally, Theorem 4.1 provides the rationale for selecting the parameter $\alpha \in (0,1]$ for the unlearning loss, whereas in [1] the constraint $\alpha \in (0,1]$ is applied to the retaining loss. **While our regularizer may appear similar to existing formulations, our novelty/contribution extends beyond the regularizer to the broader theoretical and empirical insights we provide.**

In response to the insightful comments from Reviewers h2Yv and NAAU, we have rewritten Section 5 to emphasize the redesign rationale for the regularizer and its implications, rather than merely introducing it. Section 6 further validates our design through experimental results.

Thank you again for your valuable feedback!

References:

[1] Wu, Jingfeng, et al. "The power and limitation of pretraining-finetuning for linear regression under covariate shift." Advances in Neural Information Processing Systems 35 (2022): 33041-33053.

[2] Ding, Meng, et al. "Understanding Forgetting in Continual Learning with Linear Regression." arXiv preprint arXiv:2405.17583 (2024).

[3] Wu, Jingfeng, et al. "How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?" arXiv preprint arXiv:2310.08391 (2023).

[4] Chen, Xingwu, Lei Zhao, and Difan Zou. "How transformers utilize multi-head attention in in-context learning? A case study on sparse linear regression." arXiv preprint arXiv:2408.04532 (2024).

---

**Official Review**

**Summary:** The paper focuses on the fine-tuning-based unlearning scheme, where approximate unlearning is obtained by performing additional learning steps with samples from the retained set to induce "catastrophic forgetting" of the forget set in the model. To understand the failure of this technique, the paper considers linear models and a couple of simplistic datasets where the set of non-zero features of the retain set (i) does not overlap, or (ii) partially overlaps with that of the forget set. In both cases, the theoretical results demonstrate that the fine-tuning-based unlearned model (under a specific version of fine-tuning) has very different performance on the forget set compared to the gold-standard unlearned model (which is retrained from scratch using only the retain set). Based on these results, the paper discusses a modification to the fine-tuning-based unlearning scheme (the paper states them as forms of regularization) in which we are able to reset the parts of the model corresponding to the set of features that are non-zero only on the forget set, and demonstrates how this procedure improves unlearning performance.
Based on these insights, the paper motivates the use of a fine-tuning objective for unlearning that combines both unlearning/forget accuracy and accuracy on the retain set, and empirical evaluations highlight how this combined objective improves the unlearning performance of fine-tuning while also maintaining high performance on the retain set.

**Soundness:** 3  **Presentation:** 2  **Contribution:** 1

**Strengths:**

I think one of the main strengths of this paper is the focus on fine-tuning-based unlearning schemes, which have (in general) various advantages, such as being relatively very efficient and not requiring the forget set for the unlearning, which has significant practical implications.

**Weaknesses:**

- (W1) To me, one of the main weaknesses of this paper is that there is no clear link between the theory-inspired proposed "regularizations" in Sections 3 and 4 and the empirical evaluations of Section 5. The proposed regularizations (and the related and motivating theoretical analyses) require knowledge of the distinct and overlapping features, which is usually not available. Thus, it is obvious that these regularization schemes are not practical. However, the connection between the analysis and the use of the combined objective of unlearning/forget accuracy and retain accuracy is not clear at all, even in the form of motivation.
- (W2) Even with the considered combined loss function for fine-tuning-based unlearning, it is not clear what is novel here. The combined objective has been considered before (as the paper itself mentions), but the paper claims a difference between treating one as an "objective" and one as a "penalty". The difference between what is a penalty term and what is the main objective in equations (6) and (7) seems not compelling enough. Different values of $\alpha$ would lead to different effects, and there is no inherent need to restrict $\alpha \in [0,1]$. Often both can be written as $\lambda \, (\text{Retain Loss}) + (1 - \lambda) \, (\text{Forget Loss})$ for some $\lambda \in [0,1]$ and treated as a single hyperparameter that ranges from focusing on the loss on the retain set to focusing on the performance on the forget set. In that respect, to me CE-FT and ICE-FT are the same thing. The main difference between (I)CE-FT and KL-FT is the use of KL divergence instead of cross-entropy to penalize the performance on the forget set. In that case, any difference between KL-FT and (I)CE-FT (if there is any) is attributed to the use of KL divergence.
- (W3) This is less of a weakness, but the evaluated methods KL-FT and (I)CE-FT require access to the forget set, in contrast to vanilla FT, which does not. This removes the one advantage of FT-based unlearning. In this case, a more full-scale evaluation across various schemes is warranted, including efficient schemes such as gradient ascent and influence-function-based schemes. However, I do not see any novel "method" being presented here (see W2), so there is nothing to evaluate thoroughly here.

**Minor comments:**

- (C1) In Figure 1, it appears that the quantities plotted are the Retain/Unlearning loss (1 - Retain/Unlearning accuracy), which is a bit confusing given that the legend mentions RA/UA instead of RL/UL as considered in Theorems 1 and 2.
- (C2) In the equation at line 289, there is also a difference of $\mathbf{P} \mathbf{w}_*^r$ in the definition of $\mathbf{w}_t$ from the $\mathbf{P}_r$ in the definition of $\mathbf{w}_g = \mathbf{P}_r \mathbf{w}_*^r$. How does that affect the ensuing discussion?
- (C3) In lines 315-319, it is not clear what $d_o$ is, as it does not appear to be defined anywhere. Is $d_o$ another name for $d_{\text{lap}}$ or something else?
- (C4) The function $\mathcal{L}_{\text{KL}}(\cdot)$ in equation (6) lacks any clear definition. There are multiple KL-divergence-based losses discussed in Golatkar et al. (2020a), and many of them are related to the Fisher Forgetting scheme proposed therein (unless I have my references wrong). Random labeling is a baseline there, but I could not find a random-label + KL-based loss function in there. A more explicit definition here would be very useful.

**Questions:**

- (Q1) Forgetting in linear regression (considered in Sections 3 and 4) would be similar to random data forgetting in classification. What is a "class-wise forgetting" equivalent in the regression setup?
- (Q2) All the evaluations are performed on class-wise forgetting, while the theoretical analysis is performed for regression. Is there a reason why the random forgetting scenario is not considered in the evaluations?
- (Q3) For the results in Table 2, where we have multiple unlearning metrics, how is the hyperparameter $\alpha$ selected for (I)CE-FT and KL-FT?
- (Q4) If hyperparameter optimization is done appropriately (as mentioned above), the main difference between KL-FT and CE-FT from (unmasked) SalUn is the use of KL instead of CE on the forget set. Is there any reason/intuition why we should expect a KL-divergence-based forget-set penalty to perform better in terms of all the unlearning metrics compared to the cross-entropy-based forget-set penalty (that is, (I)CE-FT vs. KL-FT)?
- (Q5) In the overparameterized regime, the optimal solutions to the learning problems (1)-(3) are not necessarily singleton sets. Is there any reason to expect the $\arg \min_{\mathbf{w}}$ to be a singleton set and not a set of solutions? If it is in fact not guaranteed to be a singleton set, how does that affect the unlearning results in this paper?
- (Q6) In the overlapping-feature case, if $d_{\text{lap}} = d$ (that is, full overlap of features), what happens to the bounds? In this case, is there any provable difference between $L(\mathbf{w}_t, D_f)$ and $L(\mathbf{w}_g, D_f)$?

**Flag for ethics review:** No ethics review needed. **Details of ethics concerns:** There are no ethical concerns in my opinion. **Rating:** 3 **Confidence:** 4 **Code of conduct:** Yes

---

**Response**

We sincerely thank the reviewer for the thoughtful feedback and for the opportunity to clarify some core aspects of our paper! We address each of the points raised below to enhance the clarity and understanding of our work.

**Response to Counterintuitive Nature of "RA over UA":**

We understand that it might seem counterintuitive that the regularization involves "prioritizing retaining accuracy over unlearning accuracy" in fine-tuning-based unlearning. In standard fine-tuning methods, the optimization focuses solely on retaining accuracy (RA) without explicitly considering unlearning accuracy (UA). This lack of an unlearning objective is precisely why naive fine-tuning often fails to effectively unlearn. Our method revisits a regularization term that explicitly balances RA and UA. By incorporating both objectives into the fine-tuning process, we can improve unlearning performance while maintaining acceptable performance on the retained data. This regularization is not inherent in standard fine-tuning, which does not address UA at all. Therefore, the discussed regularizer differs from naive fine-tuning by directly incorporating unlearning into the optimization objective.

**Response to M1: Gap Between Fine-Tuned Model and Golden Model:**

In Section 3, we analyze scenarios where fine-tuning, as defined in equation (2), does not achieve the same unlearning performance as the golden model.
We acknowledge that when the overlapping dimension $d_{\\\\text {lap }}=d$, the situation requires further examination.\\n\\n**Addressing Note A**:\\n\\nWhen $d = d_{lap}$, the solutions for each problem should be $\\\\\\\\mathbf{w}\\\\_o^{} = \\\\\\\\mathbf{P}\\\\\\\\mathbf{w}\\\\_*^{}$, $\\\\\\\\mathbf{w}\\\\_g^{} = \\\\\\\\mathbf{P}\\\\_r\\\\mathbf{w}\\\\_*^{}$, $\\\\\\\\mathbf{w}\\\\_t^{} = \\\\\\\\mathbf{P}\\\\\\\\mathbf{w}\\\\_*^{}$. Therefore, the loss for Note A should be $$L(\\\\\\\\mathbf{w}\\\\_g^{}, D\\\\_f) = \\\\\\\\frac{1}{n\\\\_f}\\\\\\\\|\\\\\\\\mathbf{X}\\\\_f (\\\\\\\\mathbf{P}\\\\_r - \\\\\\\\mathbf{I})\\\\\\\\mathbf{w}\\\\_*^{}\\\\\\\\|^2 \\\\\\\\neq L(\\\\\\\\mathbf{w}\\\\_t^{}, D\\\\_f) = 0.$$ \\nThere is still a gap between fine-tuned model $\\\\mathbf{w}_t$ and $\\\\mathbf{w}_g$. This demonstrates that even when all features overlap, the fine-tuned model does not match the golden model's unlearning performance.\\n\\n**Addressing Note B**:\\n\\nThe reviewer suggests that the gap arises due to the specific formulation of the fine-tuning objective, particularly the $\\\\\\\\|\\\\mathbf{w}-\\\\mathbf{w}_o\\\\\\\\|^2$ term in equation (2). In the following, we show that when considering the fine-tuning process as $$\\n\\\\arg \\\\min\\\\_{\\\\mathbf{w}} \\\\\\\\|\\\\mathbf{w}\\\\\\\\|^2 \\\\text { s. t. (i) } \\\\mathbf{y}_t=\\\\mathbf{X}_t^{\\\\top} w, \\\\text { (ii) }\\\\\\\\|\\\\mathbf{w}-\\\\mathbf{w}_o\\\\\\\\|^2 \\\\leq \\\\epsilon\\n$$\\nThere are still gaps between fine-tuned model $\\\\mathbf{w}_t$ and golden model $\\\\mathbf{w}_g$. 
The results can differ based on whether the constraint is active or inactive: \\n\\n**1) The Inequality Constraint is Inactive.**\\n\\nThen the fine-tuned model for distinct case (overlapping case can be derived similarly) is $\\\\mathbf{w}_t = \\\\mathbf{P}_t \\\\mathbf{w}_r^* $ and the results are $L(\\\\mathbf{w}_t, D_f) = \\\\frac{1}{n_f}\\\\\\\\|\\\\mathbf{X}_f\\\\mathbf{w}_f^*\\\\\\\\|^2 = L(\\\\mathbf{w}_g, D_f)$, and $L(\\\\mathbf{w}_t, D_r) = \\\\frac{1}{n_r}\\\\\\\\|\\\\mathbf{X}_r(\\\\mathbf{P}_t -\\\\mathbf{I})\\\\mathbf{w}_r^*\\\\\\\\|^2 \\\\neq L(\\\\mathbf{w}_g, D_r)$. In this case, fine-tuned model $\\\\mathbf{w}_t$ indeed can achieve same unlearning performance as golden model $\\\\mathbf{w}_g$, while there is a gap on the retaining performance.\\n\\n**2) The Inequality Constraint is active.**\\n\\nIf the solution exists, then active inequality constraint $\\\\\\\\|\\\\mathbf{w}_t-\\\\mathbf{w}_o\\\\\\\\|^2=\\\\epsilon$ holds. The results become:\\n$$L(\\\\\\\\mathbf{w}\\\\_t, D\\\\_f)= \\\\\\\\frac{\\\\\\\\epsilon}{n\\\\_f}\\\\\\\\|\\\\\\\\mathbf{X}\\\\_f\\\\\\\\|^2 , \\\\\\\\quad \\\\\\\\text{and} \\\\\\\\quad L(\\\\\\\\mathbf{w}\\\\_t, D\\\\_r)= \\\\\\\\frac{\\\\\\\\epsilon}{n\\\\_r}\\\\\\\\|\\\\\\\\mathbf{X}\\\\_r\\\\\\\\|^2 .$$ \\n\\nIn this case, it can be observed that regardless of whether the features are distinct or overlapping a consistent gap related to $\\\\epsilon$ exists between the fine-tuned model and the golden model. When $\\\\epsilon=0$, the scenario reduces to our analysis, demonstrating that similar results arise under this problem formulation. Thanks for the insightful comment, we will include this discussion in our revised version.\"}", "{\"comment\": \"We thank the reviewer once again for the thoughtful responses and insightful comments.\\n\\n**Response to Main Concern:**\\n\\nWe understand and appreciate the reviewer's primary concern regarding the differing conclusions under different conditions, specifically:\\n1. 
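As a numerical aside on the $\epsilon=0$ case discussed above (where the constrained formulation reduces to the paper's Eq. (2)-style fine-tuning), the gap between naive fine-tuning and the golden model can be reproduced with minimum-norm least squares. The construction below is our own illustrative sketch of the distinct-feature setting (random Gaussian blocks; all names are ours, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_r, n_f = 20, 5, 5  # overparameterized: n_r + n_f << d

# Illustrative distinct-feature setup: retain data lives on the first 10
# coordinates, forget data on the last 10.
w_star = rng.normal(size=d)
X_r = np.hstack([rng.normal(size=(n_r, 10)), np.zeros((n_r, 10))])
X_f = np.hstack([np.zeros((n_f, 10)), rng.normal(size=(n_f, 10))])
y_r, y_f = X_r @ w_star, X_f @ w_star

def min_norm(X, y, w0=None):
    """Minimum-norm interpolator; with w0, solves min ||w - w0|| s.t. Xw = y."""
    if w0 is None:
        w0 = np.zeros(X.shape[1])
    return w0 + np.linalg.pinv(X) @ (y - X @ w0)

w_o = min_norm(np.vstack([X_r, X_f]), np.concatenate([y_r, y_f]))  # pretrained
w_g = min_norm(X_r, y_r)          # golden model: retrain on the retain set only
w_t = min_norm(X_r, y_r, w0=w_o)  # naive fine-tuning anchored at w_o

loss = lambda w, X, y: float(np.mean((X @ w - y) ** 2))
print(loss(w_t, X_f, y_f))  # ~0: naive fine-tuning does not forget
print(loss(w_g, X_f, y_f))  # > 0: the golden model does
```

Since the pretrained $\mathbf{w}_o$ already interpolates the retain set, the anchored minimum-norm step leaves it unchanged, so the fine-tuned model's forget loss stays at zero while the golden model's is bounded away from zero.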
When the fine-tuning dataset $D_t$ is similar to $D_r$, and the inequality constraint becomes inactive.\\n2. When the inequality constraint is active, and $\\\\epsilon$ is sufficiently large.\\n\\nFor the first scenario, if the inequality constraint is inactive, the fine-tuned model $\\\\mathbf{w}_t$ can achieve perfect unlearning/retention loss, same as the golden model $\\\\mathbf{w}_g$. However, in what situations does the inequality become inactive? If this occurs, the problem becomes disconnected from the pretrained model $\\\\mathbf{w}_o$, meaning the fine-tuned model cannot truly be regarded as a fine-tuned model since it does not utilize the pretrained model's information.\\n\\nFor the second scenario, if $\\\\epsilon$ is large, it suggests that the pretrained model does not significantly contribute to the fine-tuned model. Consequently, both the remaining loss and unlearning loss deviate from the golden model and produce suboptimal results.\\n\\nRegarding Section 4, the elimination of forgetting data features assumes that the structure of the data/model is already known, as noted in Minor 1. This assumption is indeed restrictive in practice. To address this, we propose adding another unlearning loss function to the naive fine-tuning approach, which aligns with the overarching idea of eliminating features associated with the forgetting data.\\n\\nWe are grateful for the reviewer's insightful comments and will incorporate these discussions into our revised version. However, we still believe the current problem setup is the most suitable way to describe the unlearning process compared to the scenarios discussed above.\\n\\n**Response to Minor 1:**\\n\\nWe agree with the reviewer that accessing nicely disentangled features is not always feasible. For analytical purposes, we started with a simplified case but are keen to extend this framework to deeper analyses with more generalized assumptions and models. 
There is substantial existing work on feature learning that describes similar challenges, such as [1], and we aspire to extend our work along these lines.\", \"reference\": \"Allen-Zhu, Zeyuan, and Yuanzhi Li. \\\"Towards understanding ensemble, knowledge distillation and self-distillation in deep learning.\\\" arXiv preprint arXiv:2012.09816 (2020).\\n\\n**Response to Minor 2:**\\n\\nThank you for the reviewer's feedback. During our experiments, we observed that the choice of function (CE or KL) leads to only minor differences in the outcome. The more significant factor lies in selecting the appropriate parameter. That's why we follow your suggestion to rewrite the Section 5 part. Thanks again for your suggestion!\"}", "{\"comment\": \"Minor: Given a ground-truth distribution $p$ and a predicted distribution $q$ (which we are optimizing over), is it not the case that minimizing the cross-entropy loss\\n$$\\\\min_q CE(p | q) = \\\\min_q -\\\\sum_i p(i) \\\\log q(i)$$\\nis equivalent to minimizing the KL-divergence\\n\\n\\\\begin{align*}\\n\\\\min_q KL(p|q) & = \\\\min_q \\\\sum_i p(i) \\\\log \\\\frac{p(i)}{q(i)} \\\\newline\\n & = \\\\min_q \\\\sum_i \\\\left[ p(i) \\\\log p(i) - p(i) \\\\log q(i) \\\\right] \\\\newline\\n & \\\\equiv \\\\min_q - \\\\sum_i p(i) \\\\log q(i)\\n\\\\end{align*}\\n\\nWhat am I missing here if they are equivalent optimization problems? Is the actual difference that we use different ground-truth distributions $p$ for CE vs KL?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer h2Yv,\\n\\nThank you for taking the time to review our paper. As the deadline approaches, if we have successfully addressed your concerns, we kindly request you to consider raising your score.\\n\\nBest regards,\\nThe Authors\"}", "{\"title\": \"Continued Response\", \"comment\": \"**Response to Question 4:**\\nThank you for the question! 
The primary difference between KL-FT and CE-FT lies in the formulation of the forget dataset penalty. Specifically, KL-FT uses KL divergence: $\\\\mathrm{KL}(p \\\\| q)=\\\\sum_{i=1}^{|\\\\text{Class}|} p(i) \\\\log \\\\frac{p(i)}{q(i)}$, which measures how well the predicted distribution $q$ approximates the true distribution $p$. CE-FT uses cross-entropy: $\\\\mathrm{CE}(p \\\\| q)=-\\\\sum_{i=1}^{|\\\\text{Class}|} p(i) \\\\log q(i)$, which measures the discrepancy between the true labels and predicted probabilities by focusing on the correct class and penalizing incorrect predictions.\\nIn our paper, we aim to ensure the fine-tuning process learns an incorrect distribution for the forget dataset, which is why KL divergence is included for completeness. However, whether KL-FT performs better than CE-FT depends on the dataset, task, and evaluation metrics. In practice, their performance may vary depending on hyperparameter tuning and the structure of the forget set. We will further investigate this distinction in future work. Thank you for raising this insightful question!\\n\\n**Response to Question 5:** \\nThank you for the question! In our paper, we consider that, in the overparameterized setting, the solutions for the MSL loss (line 161) are not singleton sets. However, among these solutions, we focus on the one with the smallest $\\\\ell_2$-norm of the parameter change, as specified in Eqs. (1), (2), and (3). Consequently, the learning problems (1)-(3) admit unique solutions.\\n\\n**Response to Question 6:** \\nThanks for bringing this point to us!
If $d\\\\_\\\\text{lap} = d$, then we have $L(\\\\mathbf{w}\\\\_t, D\\\\_f) = 0$ and $L (\\\\\\\\mathbf{w}\\\\_g, D\\\\_f) = \\\\\\\\| ( \\\\\\\\mathbf{P}\\\\_r -\\\\\\\\mathbf{I} ) \\\\\\\\mathbf{w}\\\\_*\\\\\\\\|\\\\_{\\\\\\\\frac{1}{n\\\\_f} \\\\\\\\mathbf{X}\\\\_f\\\\\\\\mathbf{X}\\\\_f^{\\\\\\\\top}}.$\\nIt can be observed that a gap still exists between the golden model and the fine-tuned model, even when we assume a full overlap of features. This gap arises due to the difference between the golden model solution $\\\\\\\\mathbf{P}\\\\_r \\\\\\\\mathbf{w}\\\\_*^{}$ and real solution $\\\\\\\\mathbf{w}\\\\_*^{}$.\", \"reference\": \"[1] Fan, Chongyu, et al. \\\"Salun: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation.\\\" arXiv preprint arXiv:2310.12508 (2023).\\n\\n[2] Chundawat, Vikram S., et al. \\\"Zero-shot machine unlearning.\\\" IEEE Transactions on Information Forensics and Security 18 (2023): 2345-2354.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the Reviewer NAAU for the detailed review. We hope our following answers can address your concerns.\\n\\n**Response to Weakness 1:** Thanks for the point! Overparameterized linear models are widely adopted as a foundational framework for studying learning problems (e.g., transfer learning [1], continual learning [2], In-context learning [3,4]) and can be extended to more general settings such as neural tangent kernel (NTK) analysis. While the setting of overparameterized linear regression is indeed simplistic, it provides a valuable starting point for analyzing training dynamics, capturing the trajectory of learning rather than just upper or lower bounds. \\nAs for the focus on regression versus classification, although we use linear regression to analyze the fine-tuning process, our\\nassumptions about the data structure are based on a classification problem, which is why our experiments focus on class-wise forgetting. 
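The $d_\text{lap} = d$ gap claimed above can be checked with a small numerical sketch. This is our own construction (dense Gaussian features so that every coordinate is shared; names are illustrative): even with full feature overlap, the golden model's forget loss stays positive while the minimum-norm fine-tuned model's remains zero.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_r, n_f = 20, 5, 5  # overparameterized, all features shared (d_lap = d)

X_r = rng.normal(size=(n_r, d))
X_f = rng.normal(size=(n_f, d))
w_star = rng.normal(size=d)
y_r, y_f = X_r @ w_star, X_f @ w_star

# The pretrained model interpolates all data; minimum-norm fine-tuning on the
# retain set then leaves it unchanged, so its forget loss stays at zero.
X_all, y_all = np.vstack([X_r, X_f]), np.concatenate([y_r, y_f])
w_o = np.linalg.pinv(X_all) @ y_all
w_t = w_o + np.linalg.pinv(X_r) @ (y_r - X_r @ w_o)  # equals w_o here

# The golden model retrains from scratch on the retain set only:
# w_g = P_r w_*, with P_r the projection onto the row space of X_r.
w_g = np.linalg.pinv(X_r) @ y_r

print(np.mean((X_f @ w_t - y_f) ** 2))  # ~0: no forgetting
print(np.mean((X_f @ w_g - y_f) ** 2))  # > 0: gap persists with full overlap
```

The positive golden-model loss is exactly the $\frac{1}{n_f}\|\mathbf{X}_f(\mathbf{P}_r - \mathbf{I})\mathbf{w}_*\|^2$ term above, which is nonzero whenever $\mathbf{w}_*$ is not contained in the row space of $\mathbf{X}_r$.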
In this work, we start with the overparameterized linear regression model and hope to extend the analysis to more general cases such as multi-layer neural network in the future.\\n\\n**Response to Weakness 2:** Thank you for pointing out this! We acknowledge that there is limited distinction between ICE-FT approach and CE-FT, primarily differing in the choice of the parameter $\\\\alpha$. However, based on the analysis in Sections 3 and 4, we aim to understand the empirical phenomenon of why naive fine-tuning fails to forget. Therefore, Theorem 4.1 provides the rationale for selecting the parameter $\\\\alpha \\\\in(0,1]$ for the unlearning loss (the design of ICE-FT), whereas in [5], the constraint $\\\\alpha \\\\in(0,1]$ is applied to the retain loss. **While our objective formulation may appear similar to existing work, our novelty/contribution goes beyond the design of the regularizer.** In response to this weakness, we have rewritten Section 5 to modify our previous statement. Please check it. Thank you for the comment!\\n\\n**Response to Weakness 3:**\\nWe appreciate the Reviewer\\u2019s comment and agree that our experiments do not cover more complex unlearning settings or comparisons with a wide range of SOTA methods. However, the primary focus of our paper is to understand the empirical phenomenon of why naive fine-tuning fails to forget, as analyzed in Sections 3 and 4. The experiments are designed to validate the conclusions derived from our theoretical framework rather than to benchmark against numerous unlearning methods.\\n\\n**Response to Weakness 4:**\\nThank you for pointing this out! We have corrected this notation in our revised version.\\n\\n**Response to Question 1:**\\nDistinct features refer to features unique to either forgetting data or retaining data, while overlapping features are shared between the two. For example, consider a dataset containing two categories: bananas and cars. 
Bananas have a distinct feature like an \\\"elongated shape,\\\" and cars have a unique feature like \\\"mirrors.\\\" These features are entirely distinct. We then extend this setup to include overlapping features, such as color, where both bananas and cars might share a feature like \\\"yellow.\\\"\\n\\n**Response to Question 2:**\\nThank you for your question! In the machine unlearning community, various evaluation metrics and settings exist, including relearn time, as mentioned by the Reviewer. Relearn time, as defined in [6], measures the time needed to achieve a margin of $\\\\alpha \\\\\\\\%$ around the original accuracy in a zero-shot unlearning setting, where $\\\\alpha \\\\%$ reflects the accuracy range of the original model on the forget classes. In Table 2, we provide the runtime efficiency of our methods, which is similar to relearn time defined in [7] but differs slightly as we report the entire training process rather than just a few epochs needed to achieve the margin.\", \"references\": \"[1] Wu, Jingfeng, et al. \\\"The power and limitation of pretraining-finetuning for linear regression under covariate shift.\\\" Advances in Neural Information Processing Systems 35 (2022): 33041-33053.\\n\\n[2] Ding, Meng, et al. \\\"Understanding Forgetting in Continual Learning with Linear Regression.\\\" arXiv preprint arXiv:2405.17583 (2024).\\n\\n[3] Wu, Jingfeng, et al. \\\"How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?.\\\" arXiv preprint arXiv:2310.08391 (2023).\\n\\n[4] Chen, Xingwu, Lei Zhao, and Difan Zou. \\\"How transformers utilize multi-head attention in in-context learning? a case study on sparse linear regression.\\\" arXiv preprint arXiv:2408.04532 (2024).\\n\\n[5] Fan, Chongyu, et al. \\\"Salun: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation.\\\" arXiv preprint arXiv:2310.12508 (2023).\\n\\n[6] Chundawat, Vikram S., et al. 
\\\"Zero-shot machine unlearning.\\\" IEEE Transactions on Information Forensics and Security 18 (2023): 2345-2354.\\n\\n[7] A. Golatkar, A. Achille, and S. Soatto, \u201cEternal sunshine of the spotless net: Selective forgetting in deep networks.\u201d\"}", "{\"comment\": \"Thank you for your continued engagement with our work.\\n\\n**Response to Comment 1:**\\nThe well-trained fine-tuned model should achieve 0 loss on the fine-tuned (retaining) data but should not exhibit 0 loss on untrained (forgetting) data, even in overfitting scenarios. However, our empirical results in Table 1 show that naive fine-tuning models fail to effectively forget the forgetting data. This suggests that the fine-tuned model retains information about unseen data from the pretrained model during the naive fine-tuning process. For instance, Table 2 demonstrates that the naive fine-tuning method achieves nearly perfect remaining accuracy (99.76%) on the CIFAR-10 dataset, reinforcing the observation that naive fine-tuning struggles to effectively forget.\\n\\n**Response to Comment 2:**\\nThe key difference between our redesigned regularizer and those in previous works lies in the choice of the parameter $\\\\alpha$. As analyzed in the paper, we emphasize that regularization should prioritize retaining accuracy over unlearning accuracy. This principle ensures that the fine-tuning process does not compromise the model\\u2019s utility for the remaining data, and we validate this design choice through our experiments.\\n\\n**Response to Comment 3:**\\nAs discussed in our previous response, our analysis supports the principle that regularization should prioritize retaining accuracy over unlearning accuracy. The experimental results show that ICE-FT improves over CE-FT (the prior method) by adhering to this principle. For future work, we aim to provide deeper insights into the design of unlearning methods.
However, our current focus is on establishing the first theoretical framework and analysis for unlearning.\\n\\nThanks again for your time!\"}", "{\"comment\": \"Dear Reviewer NAAU,\\n\\nThank you for taking the time to review our paper. As the deadline approaches, if we have successfully addressed your concerns, we kindly request you to consider raising your score.\\n\\nBest regards, \\n\\nThe Authors\"}", "{\"comment\": \"We sincerely thank you for your thoughtful comments and constructive feedback. We hope to address your concerns as follows:\\n\\n**Response to W1 / W2.2:**\\nIn Table 2, the $\\\\alpha$ values are selected based on the best performance for each method. For example, for CE, we use $\\\\alpha=0.8$; for ICE, $\\\\alpha=0.1$; and for $\\\\mathrm{KL}, \\\\alpha=0.2$. In Table 4, we provide each result with varying $\\\\alpha$.\\n\\n**Response to W2.3:**\\nThank you for pointing this out! We will ensure the best entries in Table 2 are bolded.\\n\\n**Response to W3:**\\nThank you for the observation. We will include this assumption to clarify the constraints.\\n\\nThank you once again for your valuable time and positive feedback. We greatly appreciate your efforts in helping us improve our work!\"}", "{\"title\": \"Response to author replies\", \"comment\": \"I thank the authors for the point-by-point responses, but I am still confused about the following points:\\n1. Regarding the theoretical analysis, in an overparameterized linear model, it is easy to show that the well-trained model can achieve a 0 loss for all data due to overfitting and then claim the fine-tuning will never work because the remaining data loss has been minimized to 0 even in the original well-trained model. But for other datasets, even for the simplest ones like MNIST, is it possible to achieve an exact 0 training loss? The usage of overparameterized linear regression might not be appropriate for the unlearning analysis.\\n\\n2. 
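For concreteness, the weighting principle discussed above (retaining accuracy prioritized over unlearning accuracy, i.e. a weight $\alpha \in (0,1]$ on the unlearning term) can be written as a one-line objective. The names below are illustrative, not the paper's implementation:

```python
def unlearning_objective(retain_loss: float, forget_term: float, alpha: float) -> float:
    """Retain-set loss plus an alpha-weighted unlearning term, alpha in (0, 1].

    Keeping alpha <= 1 ensures the retention term never receives less weight
    than the unlearning term, matching the principle discussed above.
    """
    assert 0.0 < alpha <= 1.0, "analysis above argues for alpha in (0, 1]"
    return retain_loss + alpha * forget_term
```

Setting $\alpha \to 0$ recovers naive fine-tuning (retention only), which is the degenerate case the discussion above argues is insufficient for forgetting.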
I have reread through the revised Section 5, and I cannot understand the differences between the proposed 'regularizer' and the retaining data loss term in many previous works like Salun, Bad-Teacher and so on. Such papers have tried KL or CE or embedding alignment for unlearning. Thus, I wonder what are the differences between the proposed loss function and the previous ones.\\n\\n3. Then, regarding the experiments, I understand that this paper does not aim to propose some new benchmarks but focuses on the analysis of why fine-tuning fails. But do the experiments aim to provide some insights about different forms of fine-tuning loss in unlearning? However, without considering the complete SOTA methods, how can we guarantee the experimental results can benefit such methods or future research works?\"}", "{\"title\": \"Response\", \"comment\": \"**Response to Minor 1:**\\nThank you for your insightful comment! Yes, in this work, we start with the overparameterized linear regression model and aim to extend the analysis to more general cases in the future. We will revise our statements to clarify that the conclusions are restricted to our specific case. Additionally, we provide more discussion on the relationship between our setup and Ding's work here. Both studies are based on the overparameterized linear regression model. However, the key difference is that our case involves two tasks: the first task is pretraining on all the data, and the second task is fine-tuning on a subset of the data. In contrast, Ding's work focuses on the continual learning setting, where each task has its own independent dataset.\\n\\n**Response to Minor 2:**\\nThank you for the suggestion. Indeed, the overlapping case can be seen as a generalization of the distinct case. The discussions following Theorems 3.2 and 3.4 aim to highlight the fact that naive fine-tuning fails to achieve effective unlearning. 
In the subsequent sections, we shift our focus to the differences between the two cases.\\nWe will revise the discussion to reduce redundancy and place greater emphasis on how the two cases are related and how they differ. Thanks!\\n\\n**Response to Minor 3:**\\nThank you for pointing this out! We revised the paper to include clear references to the appendix in the main text.\\n\\n**Response to Minor 4:**\\nThank you! It should be the $\\\\\\\\|\\\\cdot\\\\\\\\|\\\\_2$ norm. We corrected this in the revision.\\n\\n**Response to Minor 5:** \\nThank you for the suggestion! We rewrote Section 5 to better connect to the previous section, please check it.\\n\\n**Response to Minor 6:** \\nThank you for the suggestion! We introduced UA(UL) and RA(RL) earlier in the revised version, as part of the problem description.\\n\\n**Response to Typos:** We sincerely thank Reviewer bHDY for the detailed suggestions! We have addressed these in our revised version as follows: (a) Fix typos like inverted brackets in (13), (38), and (109), as well as \\\"removing\\\" $\\\\rightarrow$ \\\" removing \\\" and \\\"overparamterized\\\" $\\\\rightarrow$ \\\"overparameterized.\\\"\\n(b) Update Table 1 header to \\\"FT (Fine-Tuning) Method.\\\"\\n(d) Define overparameterized linear regression $(n \\\\ll d)$ when it is introduced in Section 2.\\n(e) Introduce mathematical notation for RL and UL when they are first defined.\\n(f) Add a reference to the proof in the appendix after Theorem 3.2.\\n(g) Adjust the font size in Figure 1 to improve readability.\\n(h) Introduce the notation for a wrong label before Eq. (6).\\n(i) Shorten the caption for Table 2 by removing the line about evaluation metrics to save space.\\n\\n**Response to Question 1:** Thank you for the question! We also observe a tendency for the unlearning loss to decrease as the fraction of overlapping features increases. This is an intuitive conclusion, as overlapping features likely play a key role in retaining shared information. 
However, under the current problem setup, we cannot directly prove this observation, which is why it is not discussed in detail in the paper. We believe this phenomenon may be influenced by other factors, such as the importance of the overlapping features. Exploring how to quantify these factors and their impact on unlearning dynamics will be a focus of future work. Thank you for your question!\\n\\n**Response to Question 2:**\\nThank you for the question! Based on our analysis, we favor the principle that regularization should prioritize remaining accuracy over unlearning accuracy, which supports constraining $\\\\alpha \\\\in$ $(0,1]$ for the unlearning term. While this specific range may not be explicitly mentioned in [1], we reviewed their code and directly consulted the authors, who confirmed that $\\\\alpha \\\\in(0,1]$ is the range they used in their implementation.\\n\\n**Response to Question 3:**\\nThank you for the question! There are many evaluation metrics in the unlearning problem. However, we prioritize the unlearned model's behavior to closely approximate the retrained (golden) model. Therefore, the metrics we consider are not limited to UA, RA, and TA. \\n\\nThank you again for your careful review and helpful feedback!\", \"reference\": \"[1] Chongyu Fan, Jiancheng Liu, Yihua Zhang, Dennis Wei, Eric Wong, and Sijia Liu. Salun: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation. arXiv preprint arXiv:2310.12508, 2023.\"}", "{\"comment\": \"I thank the author for promptly analyzing the new fine-tuning objective and for responding to all my comments.\\n\\n**Regarding gap between $ L( \\\\mathbf{w}_g, D_f ) $ and $ L( \\\\mathbf{w}_t, D_f ) $ in Addressing Note A**\\n\\nThanks for the clarification. 
There would be a significant gap in the $d_{\\\\text{lap}} = d$ case only if $ \\\\mathbf{P} $ and $ \\\\mathbf{P}_r $ are significantly different, which is straightforward with non-overlapping features, but the existence of such a difference is a bit more non-trivial with overlapping features (especially when all features are overlapping). If $ \\\\mathbf{ P } \\\\approx \\\\mathbf{ P }_r $, then the gap might not be statistically significant.\\n\\n**Regarding Addressing Note B with distinct features**\\n\\nI have not verified the derivation, but assuming that they are correct, one would make the following conclusions:\\n- In the case where the constraint (ii) is inactive, the forgetting quality of fine-tuning matches the forgetting quality of the golden model. This is the opposite conclusion to the one in Theorem 3.2.\\n- There is a difference in the retaining accuracy of the fine-tuned and golden model (again counter to Theorem 3.2, where we see that the RL matches between fine-tuning and golden). However, this relies on the difference between the projection $\\\\mathbf{P}_t$ of the fine-tuning set $D_t$ and the projection $\\\\mathbf{P}_r$ of the full retained set $D_r$ where $D_t \\\\subset D_r$. Assuming that the data is i.i.d. 
(which is standard), unless $D_t$ is dramatically smaller than $D_r$ (or adversarially sampled), the $\\mathbf{P}_t$ and $\\mathbf{P}_r$ should not be that different, and thus the gap in retained accuracy should not be that large.\n- When the constraint (ii) is active, it is expected that there would be a gap, as I had mentioned, although I would want to understand derivations of the posted results.\n\nSo, with this formulation for fine-tuning (different than the one in (2)), it seems that fine-tuning is statistically similar to the golden model both in forgetting and in retained accuracy as long as\n- (i) the projection $\\mathbf{P}_t$ of the subset $D_t$ is not significantly different than the projection $\\mathbf{P}_r$ of the full set $D_r$ (which should be the case with a large enough fine-tuning set), and\n- (ii) we allow the fine-tuning to be sufficiently thorough (that is, we allow $\\epsilon$ to be large enough).\n\nTo me, this is a very different conclusion showing that fine-tuning **can work** if we solve the right fine-tuning problem without any need for having unlearning accuracy as a regularization. This conclusion is very different from the conclusion drawn regarding fine-tuning as defined in (2), which is then used by the authors to motivate the use of unlearning accuracy in the fine-tuning process. This is my main reason for saying that the formulation of the fine-tuning problem significantly affects the conclusions we are drawing here, which makes the overall theoretical message unclear.\n\nOf course, practically utilizing some form of unlearning loss (alongside loss on the retained set) during unlearning has been seen to be extremely useful.
However, if one formulation of fine-tuning tells us that we need to include some unlearning loss, while a slight (but significant) modification of the fine-tuning objective tells us that fine-tuning matches golden if some conditions (independent of any unlearning loss) are satisfied, then it tells me that there is something in this theoretical framework we are not properly understanding.\"}", "{\"comment\": \"**Response to M2:**\\n\\nWe respectfully disagree with the assertion that our results are an artifact of our problem formulation. Our study focuses on class-wise unlearning with distinct features, a common scenario in classification tasks where different classes possess unique attributes. In the following, we provide practical relevance of distinct features:\\n\\n**Example with CIFAR-10 Dataset:**\", \"car_class\": \"Features like wheels and headlights.\", \"bird_class\": \"Features like wings and feathers.\\n\\nIn such datasets, the features associated with different classes are distinct. Our analysis is designed to reflect this practical aspect, where unlearning a class involves removing specific, non-overlapping features. This justifies our focus on cases where the forget and retain datasets have distinct features.\\n\\n**Regarding Full Feature Overlap:**\\n\\nWe acknowledge that when $d=d_{\\\\text {lap }}$, the forget and retain datasets share all features, however, it resembles random data unlearning. This scenario is not the primary focus of our work, as it does not align with the classwise unlearning framework we investigate. Our primary interest lies in situations where classes can be distinguished by their unique features a common occurrence in real-world applications.\\n\\n**Response to M3:**\\n\\nWe acknowledge that setting the weight parameter $\\\\alpha=0$ in our objective function effectively reduces our method to naive fine-tuning, focusing solely on retaining accuracy without explicitly addressing unlearning. 
However, our analysis and experimental results aim to provide more validation for our previous analysis. Our experiments demonstrate that placing more weight on retaining accuracy indeed improves performance on the forget dataset without compromising retaining effectiveness. While naive fine-tuning serves as a baseline case, it can not provide insights into the trade-off between unlearning and retaining with only one objective.\\n\\n**Response to Minor:**\\n\\nThank you for your insightful comment. You are correct that, theoretically, minimizing the crossentropy loss $\\\\text{CE}(p \\\\mid q)$ over $q$ is equivalent to minimizing the KL-divergence $\\\\text{KL}(p \\\\mid q)$ over $q$, since the entropy term is constant with respect to $q$.\\n\\nHowever, in our implementation, we observed practical differences due to how these loss functions are handled in machine learning libraries. Specifically, cross-entropy loss functions typically expect logits (unnormalized scores) and internally apply softmax, while KL-divergence functions often require probabilities or log probabilities as inputs. Additionally, some implementations compute $\\\\text{KL}(q \\\\mid p)$ instead of $\\\\text{KL}(p \\\\mid q)$, affecting the optimization direction.\\n\\nThese implementation details can lead to different optimization behaviors, even when using the same ground-truth distribution $p$. Therefore, while the theoretical objectives are equivalent, practical differences arise due to the specifics of function implementations and input requirements.\\n\\nWe appreciate your observation and will clarify this point in our revised manuscript to better reflect the theoretical equivalence and practical considerations!\\n\\nThank you again for your continued engagement with our work and the thoughtful feedback you've provided!\"}", "{\"comment\": \"Thank you once again for your initial review! We have carefully addressed your comments and provided a detailed response. 
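The theoretical equivalence and the practical caveats above are easy to verify numerically. The sketch below is our own illustration (NumPy, illustrative names): $\mathrm{CE}(p, q) = \mathrm{KL}(p \| q) + H(p)$, so the two losses share minimizers in $q$, while the reverse divergence $\mathrm{KL}(q \| p)$ is a different objective.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative distributions: p a fixed "target" (e.g. a random-label
# distribution), q a model's predicted distribution.
p = softmax(rng.normal(size=5))
q = softmax(rng.normal(size=5))

ce = -np.sum(p * np.log(q))        # cross-entropy CE(p, q)
kl = np.sum(p * np.log(p / q))     # forward KL(p || q)
entropy = -np.sum(p * np.log(p))   # H(p), constant with respect to q

# CE(p, q) = KL(p || q) + H(p): the objectives differ only by a constant
# in q, so they have identical minimizers when p is held fixed.
print(np.isclose(ce, kl + entropy))  # True

# The reverse direction KL(q || p), which some implementations effectively
# compute, is a genuinely different objective with different gradients in q.
kl_rev = np.sum(q * np.log(q / p))
```

In practice, discrepancies therefore enter through input conventions (cross-entropy utilities typically expect logits, KL utilities often expect log-probabilities) and through the direction of the divergence, rather than through the objectives themselves.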
As the discussion period deadline approaches, we would greatly value your feedback on our response.\"}", "{\"comment\": \"I thank the authors for the updated manuscript and the point-by-point responses. Based on reading the revised manuscript, I continue to struggle with some of the core conceptual aspects of the paper:\\n\\nIt is a bit counterintuitive that the new \\\"regularization\\\" is \\\"prioritizing retaining accuracy over unlearning accuracy\\\" in fine-tuning based unlearning. To the best of my understanding, this is something that already happens in fine-tuning as fine-tuning literally just optimizes for the retaining accuracy with no regard to the unlearning accuracy (as also defined in (2) except for the $\\mathbf{w}_o$ term in the loss $|| \\mathbf{w} - \\mathbf{w}_o ||^2$). So fine-tuning is already only prioritizing \\\"retaining accuracy\\\" in the most extreme sense, and still is unsuccessful in unlearning (as shown in section 3).\\n\\nThus, the main message I take away from (re)reading this paper is the following:\\n- **M1:** (section 3) Fine-tuning (as defined in (2)) is not successful in unlearning as there is a significant gap between the golden model $\\mathbf{w}_g$ and the fine-tuned model $\\mathbf{w}_t$ (although there are some caveats; see **Note A** and **Note B**). \\n- **M2:** (section 4) In the overlapping setup, it is better to leave the overlapping feature weights (instead of resetting them) as this provides the best tradeoff between unlearning loss and retaining loss of the two ways to \\\"regularize\\\" (reset) the weights for fine-tuning. However, in the (practically usual) case where $d_{\\text{lap}} = d$, this regularization is a no-op as there is nothing to be done before fine-tuning, and we are back to basic fine-tuning (except that we are solving a specific instance of fine-tuning defined in (2)). 
So effectively, the best regularization is to just do fine-tuning.\\n- **M3:** (section 5) This message is reiterated with the experiments where the authors show that putting more weight on the retaining accuracy is better for unlearning/utility tradeoff. However, again, note that putting all the weight on the retain accuracy is the usual fine-tuning based unlearning.\\n\\n\\n**Note A:** The $d_{\\\\text{lap}} = d$ case is not clear. Given $\\\\mathbf{w}\\\\_\\\\* = \\\\mathbf{w}\\\\_\\\\*^{\\\\text{lap}}$ when $d_{\\\\text{lap}} = d$, let the original problem (1) solution be $\\\\mathbf{w}\\\\_o = \\\\mathbf{w}\\\\_\\\\*$. Since $y\\\\_r = X\\\\_r^\\\\top \\\\mathbf{w}\\\\_\\\\*$, $\\\\mathbf{w}\\\\_t = \\\\mathbf{w}\\\\_\\\\*$ with $L(\\\\mathbf{w}\\\\_t, D\\\\_r) = 0$ since this value would minimize $||\\\\mathbf{w}\\\\_o - \\\\mathbf{w} ||^2$. Also $L(\\\\mathbf{w}\\\\_t, D\\\\_f) = 0$ so no forgetting happens. However, $\\\\mathbf{w}\\\\_g = \\\\mathbf{w}\\\\_\\\\*^{\\\\text{lap}} = \\\\mathbf{w}\\\\_\\\\* = \\\\mathbf{w}\\\\_t$. So $L(\\\\mathbf{w}\\\\_g, D\\\\_r) = 0$ but also $L(\\\\mathbf{w}\\\\_g, D\\\\_f) = L(\\\\mathbf{w}\\\\_*, D\\\\_f) = L(\\\\mathbf{w}\\\\_t, D\\\\_f) = 0$, so the golden unlearned model also has low unlearning loss and no gap exists between the golden and the fine-tuned model. So it is not clear when and where the mentioned gap would exist.\\n\\n**Note B:** It seems to me that the results in section 3 regarding the gap between $L(\\\\mathbf{w}\\\\_t, D\\\\_f)$ and $L(\\\\mathbf{w}\\\\_g, D\\\\_f)$ stems from the formulation of the objective $\\\\arg\\\\min || \\\\mathbf{w} - \\\\mathbf{w}\\\\_o ||^2$ for fine-tuning in (2) involving the original training solution $\\\\mathbf{w}\\\\_o$. This makes sense given that the fine-tuning process initiates from $\\\\mathbf{w}\\\\_o$. However, this formulation seems critical to have the gap between fine-tuning and golden model. 
Since the golden model minimizes the norm $|| \\mathbf{w} ||^2$ in (3), it drops the disentangled $\\mathbf{w}\\_\\*^f$ part of the weights. In contrast, the fine-tuning solution in (2) is limited to be a minimal distance solution with the $|| \\mathbf{w} - \\mathbf{w}\\_o ||^2$ term, thereby forcing the $\\mathbf{w}\\_t$ to keep the $\\mathbf{w}\\_\\*^f$ component of the weights. To me, this is not an inherent behaviour of fine-tuning based unlearning, but rather an artifact of the fine-tuning problem formulation. Instead, if the fine-tuning problem (2) were formulated as follows, where we again look for the lowest-norm solution while being close to the initial $\\mathbf{w}\\_o$, it is not clear if the results would continue to hold:\\n\\n$$ \\arg \\min_{\\mathbf{w}} || \\mathbf{w} ||^2 \\text{ s. t. } \\text{(i)} y_t = X_t^\\top \\mathbf{w}, \\text{(ii)} || \\mathbf{w} - \\mathbf{w}_o ||^2 \\leq \\epsilon.$$\\n\\nIn this case, the preservation of the $\\mathbf{w}\\_\\*^f$ component in the $\\mathbf{w}\\_t$ is no longer ensured, and the unlearning loss gap between $L(\\mathbf{w}\\_t, D\\_f)$ and $L(\\mathbf{w}\\_g, D\\_f)$ would probably depend on $\\epsilon$ in constraint (ii), which corresponds to how thorough we allow the fine-tuning (using the retain loss) to be. If $\\epsilon$ is large enough that $|| \\mathbf{w}_o - \\mathbf{w}_g ||^2 \\leq \\epsilon$, then fine-tuning should find $\\mathbf{w}\\_g$.\"}" ] }
CGbfokGFP7
3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion
[ "Zhaoxi Chen", "Jiaxiang Tang", "Yuhao Dong", "Ziang Cao", "Fangzhou Hong", "Yushi LAN", "Tengfei Wang", "Haozhe Xie", "Tong Wu", "Shunsuke Saito", "Liang Pan", "Dahua Lin", "Ziwei Liu" ]
The increasing demand for high-quality 3D assets across various industries necessitates efficient and automated 3D content creation. Despite recent advancements in 3D generative models, existing methods still face challenges with optimization speed, geometric fidelity, and the lack of assets for physically based rendering (PBR). In this paper, we introduce 3DTopia-XL, a scalable native 3D generative model designed to overcome these limitations. 3DTopia-XL leverages a novel primitive-based 3D representation, PrimX, which encodes detailed shape, albedo, and material fields into a compact tensorial format, facilitating the modeling of high-resolution geometry with PBR assets. On top of the novel representation, we propose a generative framework based on Diffusion Transformer (DiT), which comprises 1) Primitive Patch Compression and 2) Latent Primitive Diffusion. 3DTopia-XL learns to generate high-quality 3D assets from textual or visual inputs. We conduct extensive qualitative and quantitative experiments to demonstrate that 3DTopia-XL significantly outperforms existing methods in generating high-quality 3D assets with fine-grained textures and materials, efficiently bridging the quality gap between generative models and real-world applications.
[ "3D Generation", "Diffusion Model", "Image-to-3D", "Text-to-3D", "PBR Asset", "3D Representation", "Primitives" ]
https://openreview.net/pdf?id=CGbfokGFP7
https://openreview.net/forum?id=CGbfokGFP7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wV4tYIzAJt", "tYyycbMx63", "j3CRvWo3cE", "XCcXjz1rNj", "9HdJDbalGx" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730637656525, 1731656510079, 1729871435734, 1729784468831, 1730413679119 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1450/Reviewer_8JMj" ], [ "ICLR.cc/2025/Conference/Submission1450/Authors" ], [ "ICLR.cc/2025/Conference/Submission1450/Reviewer_1Sfy" ], [ "ICLR.cc/2025/Conference/Submission1450/Reviewer_XGY3" ], [ "ICLR.cc/2025/Conference/Submission1450/Reviewer_zK5B" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces 3DTopia-XL, a 3D generative model designed to create 3D assets from both images and text. It extends 3D representation from M-SDF to support color and PBR materials, encoding shape, albedo, and material fields into a compact tensor format. The model framework, built on the Diffusion Transformer (DiT), incorporates Primitive Patch Compression and Latent Primitive Diffusion. Extensive experiments show that 3DTopia-XL effectively generates convincing 3D assets from both textual and visual inputs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Strengths:\\n1\\uff09It extends the 3D representation from M-SDF to support color and PBR materials and integrates it into the generative framework, enabling the generation of 3D assets containing PBR materials from text and image conditions. This introduces a new approach to 3D representation for native 3D generative tasks. \\n2\\uff09Ablation studies on the number and resolution of primitives validated the selection of resolution. \\n3\\uff09The presentation of the paper is good.\", \"weaknesses\": \"Weakness\\uff1a\\n1\\uff09My concern is regarding the accuracy of the PBR materials. The fourth row in Figure 7 shows noticeably incorrect PBR materials. 
The paper claims to generate 3D assets with PBR textures, so it should include experiments on a 3D synthetic dataset where the ground truth metallic and roughness data are available. For example, quantitative evaluations can be handily conducted on a synthetic dataset with ground truth PBR data using PSNR or SSIM for albedo, metallic, and roughness maps.\\n2\uff09As mentioned in line 292, the paper introduces patch-based compression aimed at incorporating inter-channel correlations between geometry, color, and materials. However, there is no experimental evidence to support this. As shown in Table 3, comparing line 1 and line 2, when the grid compression rate remains the same, reducing the feature dimensions representing different geometry, color, and materials from 6 to 1 actually results in a decrease in PSNR. The authors could provide additional experiments or analyses that directly demonstrate the benefits of incorporating these correlations, such as comparing the proposed approach to a baseline that processes geometry, color, and materials independently.\\n3\uff09It seems that the experimental results are missing an ablation study where patch-based compression is not applied. It would be even better if qualitative comparison results could be added. The authors could include an ablation study that compares their patch-based compression approach to a baseline without compression, showing both quantitative metrics and qualitative visual comparisons.\\n4\uff09The current generated results lack diversity. It would be better to include quantitative measures of diverse objects with complex topologies, such as challenging chairs, plants, and buildings.\\n\\nIn summary, I have concerns about the novelty of the paper. The 3D representation primarily extends existing work on M-SDF, and the performance of the extended features, such as PBR material, is not satisfactory. 
Additionally, despite the use of patch-based compression, the reported computational cost is substantial, requiring 16 nodes of 8 A100 GPUs for around 14 days to converge. In comparison, InstantMesh only needs 8 NVIDIA H800 GPUs. Moreover, compared with InstantMesh, the generated geometric results in this paper tend to be overly smooth, losing geometric details, as shown in Figure 5.\", \"questions\": \"Questions\uff1a\\nSee weakness 1, 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": [\"This paper introduces a 3D Physically-Based Rendering (PBR) asset generation method utilizing an expressive and efficient 3D representation, dubbed PrimX.\", \"The method leverages two main techniques: Primitive Patch Compression and Latent Primitive Diffusion, effectively balancing generation speed with quality, achieving PBR asset creation within just 5 seconds of denoising.\", \"The proposed method consistently outperforms baseline methods in text-to-3D and image-to-3D settings.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**1.** The paper is well-motivated, effectively addressing the PBR rendering challenges for 3D asset generation.\\n\\n**2.** The design of the model and its pre-processing pipeline (including \\\"Mesh to PrimX\\\", Prim Patch Encoder, and Primitive Diffusion) is well-conceived.\\n\\n**3.** The paper is well-written and provides extensive qualitative results to demonstrate its superiority (e.g., high geometric fidelity of PBR assets). 
The authors also include a comprehensive ablation study and a user study, which confirm the effect of each component and highlight the model's performance.\", \"weaknesses\": \"**1.** The primary concern is the $\\textcolor{blue}{\\text{apparent similarity}}$ between PrimX (the core design of this work) and PrimDiffusion (Chen et al., 2023b), which is scarcely mentioned, even in the \\\"difference with related work\\\" ( **Sec.** A.1.1). The authors should clarify the fundamental differences between the two methods and prove (or clarify) why extending 3D **Primitives** to **PrimX**, which includes the PBR attribute (Material $\\in \\mathbb{R}^{a^{3}\\times 2}$), is non-trivial.\\n\\n**2.** Although the authors provide geometric and texture reconstruction results (e.g., **Fig.** 1, **Fig.** 5, and **Fig.** 7), they only show the predicted RGB images (texture) from the front view, which could potentially be directly derived from the input image. The authors should showcase texture rendering from diverse viewpoints, including side and $\\textcolor{blue}{\\textbf{back views}}$, to better validate the model's capabilities.\\n\\n**3.** Compared to cutting-edge 3D generation works (e.g., Real3D, LGM, CRM, InstantMesh), 3DTopia-XL requires substantial training resources and time, taking approximately **128** NVIDIA A100 GPUs and **14** days to converge. Apart from the mentioned \\\"v-prediction\\\", could pre-trained 2D models or other training strategies help accelerate convergence? How can the authors reduce training costs to democratize the proposed 3D generative model?\\n\\n**4.** In the Image-to-3D setting (**Sec.** 4.2), 3DTopia-XL achieves overwhelming superiority in PBR rendering due to baseline methods' incapability to support spatially varied materials. 
It may be beneficial for the authors to render objects without reflections in a fixed and simplified lighting environment, then quantitatively compare with other methods on a common GSO dataset and report PSNR, SSIM, LPIPS, and CD metrics.\\n\\nIf the authors could address my concerns by providing corresponding quantitative or qualitative results based on **the weaknesses** and **review feedback**, I will consider improving my score.\", \"questions\": [\"After generating the latent PrimX via latent primitive diffusion (**Fig.** 3), is it necessary to convert PrimX to mesh representations for the generation of 3D PBR assets in practical applications?\", \"In **Sec.** A.2.4, the runtime of the \\\"PrimX to Mesh\\\" process (which includes UV unwrapping, Rasterizer, and Marching Cubes) is missing.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a method for 3D shape and PBR texture generation. The key to the method is the proposed representation PrimX \\u2014 each object is represented by a set of primitive cubes on the surface, with each primitive containing position, scale, and volumetric SDF and PBR values, thus forming a tensorized representation. This representation can be converted to and from textured meshes, and compressed into a more compact latent space using a primitive-level VAE. A DiT is trained on this latent space for conditioned 3D generation. The authors conducted some comparisons and ablations to demonstrate the superiority of the method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The writing is clear and easy to follow.\\n2. Presented a novel representation for 3D object generation.\\n3. The proposed method can also generate PBR materials.\\n4. Thorough ablations are done regarding the hyper-parameters of the proposed method.\\n5. 
The scaling result of the method shows its potential to further increase quality by increasing data and model size.\", \"weaknesses\": \"1. Discussion and comparison to sparse-voxel based generation methods are omitted, including but not limited to:\\n 1. Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process, CVPR 2023\\n 2. SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation, CVPR 2023\\n 3. Locally Attentional SDF Diffusion for Controllable 3D Shape Generation, SIGGRAPH 2023\\n 4. One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion, CVPR 2024\\n 5. Make-A-Shape: a Ten-Million-scale 3D Shape Model, ICML 2024\\n 6. XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies, CVPR 2024\\n 7. MeshFormer: High-Quality Mesh Generation with 3D-Guided Reconstruction Model, NeurIPS 2024\\n\\n These methods are also parameter-efficient thanks to sparsity.\\n2. The shape and texture generation quality is lower than SOTA methods like CLAY, MeshLRM, and MeshFormer. The PBR also lacks high-frequency details.\\n3. Factual error in A.1.1: vecset can be extended to store color or PBR properties, by adding these properties along with PE in the encoding stage and decoding them by querying with positions. They are also differentiably renderable when combined with a differentiable marching cube algorithm and a differentiable rasterizer.\\n4. 4.2 Fig. 5 comparison with other methods: in the qualitative result part, the meshes are not aligned; they are placed at different locations and with different scales. Also, quantitative results on the image-to-3D task besides user studies are not reported.\\n5. Although claimed in the abstract, no real-world test cases are shown.\", \"questions\": \"1. The proposed method does not need differentiable rendering. It is also unclear how to train a latent 3D diffusion model using the so-called direct 2D supervision. 
That said, the possibility of learning from 2D images using PrimX is over-claimed.\\n2. How are the primitives ordered for a single object during the diffusion process? Will the order affect generation quality?\\n3. What is the advantage of using a local VAE instead of a global one? Does it bring better quality, or is it just cheaper to train? If you can train a DiT of ~1B, training a global VAE shouldn't be a big burden. The authors should provide this additional ablation on the VAE.\\n4. Although the representation is claimed to be \\u201crapidly tensorizable\\u201d, the textured-mesh-to-PrimX conversion is actually very slow, as each shape takes 1.5 min. I wonder how long it takes to process all these 250k training objects.\\n5. 4.1 representation evaluation: why not compare with sparse voxel and vecset? Also, what if the number of params of all methods is increased, like 5M and even 25M? Will the proposed representation still hold an advantage? The proposed representation also failed to reproduce texture details, e.g. the eyes in Fig 4. Can this problem be alleviated with more parameters? Besides, I think a more valuable comparison would be fixing the number of parameters in the encoded/latent space, as this is where the diffusion model runs.\\n\\nGiven the limited improvement over existing methods, the untenable motivation, and the missing evaluations, I lean toward rejecting this paper. However, I'm willing to raise my score if the authors provide justifications for my above questions, and add missing results as listed in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"3DTopia-XL is a high-quality 3D generative model designed to meet the growing demand for efficient 3D asset creation in fields such as gaming, virtual reality, and film. 
It introduces an innovative representation, PrimX, which encodes complex shapes, textures, and materials into a compact tensor format, supporting Physically Based Rendering (PBR) for realistic visuals. The model employs a Diffusion Transformer framework that facilitates efficient 3D asset generation from text or image inputs through its unique Primitive Patch Compression and Latent Primitive Diffusion techniques. Additionally, 3DTopia-XL includes optimized algorithms for extracting detailed PBR assets, ensuring easy integration into graphics engines. Experiments demonstrate that 3DTopia-XL significantly outperforms existing methods in producing high-resolution, finely detailed 3D assets, making it a promising foundation for advanced 3D generative modelling applications.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces a novel tensor-based 3D representation called PrimX, which efficiently encodes geometry, albedo, and material properties. This representation allows for high-quality, physically-based rendering (PBR) assets with smooth geometry and intricate texture details, offering a more compact and efficient solution compared to traditional methods.\\n\\n2. The proposed 3DTopia-XL model leverages a Diffusion Transformer to perform 3D generation from textual or visual inputs. This framework, combined with PrimX, enables high-quality and large-scale 3D asset generation, accommodating complex tasks like image-to-3D and text-to-3D conversion with better quality and efficiency compared to existing approaches.\\n\\n3. The paper provides a detailed and efficient method for converting the PrimX representation back into a textured mesh in GLB format. 
This process ensures high-quality asset extraction by applying techniques like UV unwrapping, dilating, and inpainting, making the output ready for various downstream applications in graphics engines, with minimal additional processing required.\", \"weaknesses\": \"1. In Eq. 3, v is defined as a set of N volumetric primitives; however, it is not clear what each primitive denotes.\\n\\n2. It\\u2019s not immediately clear what N and D represent. Are these dimensions related to the number of primitives and data channels (e.g., geometry and texture details), or is there another interpretation? Further defining these terms would add clarity.\\n\\n3. The paper mentions using initialized positions and scales, where I is a unit local voxel grid. However, it\\u2019s not entirely clear how these initialized positions and scales contribute to the overall volumetric representation in the PrimX model. More context on what this setup accomplishes geometrically would help clarify this step.\\n\\n4. The role of permutation equivariance in PrimX and its impact on the Transformer model's design are unclear. Clarifying how permutation equivariance inherently maintains structure among primitives or why it aligns well with Transformer processing would be helpful.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
CGT0T9uUOY
Object-aware lifting for 3D scene segmentation in Gaussian splatting
[ "Runsong Zhu", "Shi Qiu", "Zhengzhe Liu", "Ka-Hei Hui", "Qianyi Wu", "Pheng-Ann Heng", "Chi-Wing Fu" ]
Lifting is an effective technique for producing a 3D scene segmentation by unprojecting multi-view 2D instance segmentations into a common 3D space. Existing state-of-the-art lifting methods leverage contrastive learning to learn a feature field, but rely on a hyperparameter-sensitive and error-prone clustering post-process for segmentation prediction, leading to inferior performance. In this paper, we propose a new unified \textit{object-aware lifting} approach in a 3D Gaussian Splatting field, introducing a novel learnable \textit{object-level codebook} to account for objects in the 3D scene for an explicit object-level understanding. To start, we augment each Gaussian point with an additional Gaussian-level feature learned using a contrastive loss. More importantly, enabled by our object-level codebook formulation, we associate the encoded object-level features with Gaussian-level point features for segmentation predictions. Further, we design two novel modules, the association learning module and the noisy label filtering module, to achieve effective and robust codebook learning. We conduct experiments on three benchmarks, i.e., the LERF-Masked, Replica, and Messy Rooms datasets. Both qualitative and quantitative results demonstrate that our new approach significantly outperforms the existing methods in terms of segmentation quality and time efficiency.
[ "3D-GS", "3D scene segmentation", "Lifting" ]
https://openreview.net/pdf?id=CGT0T9uUOY
https://openreview.net/forum?id=CGT0T9uUOY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "dR3FMtQoDx", "Az6G2DaLnf", "AJYo8j5o9e", "4y8p0QHeXA", "2gmSMsQxHA" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730482661505, 1731487730841, 1730539197015, 1730623677963, 1730667930352 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1373/Reviewer_A1ye" ], [ "ICLR.cc/2025/Conference/Submission1373/Authors" ], [ "ICLR.cc/2025/Conference/Submission1373/Reviewer_beAN" ], [ "ICLR.cc/2025/Conference/Submission1373/Reviewer_tvsX" ], [ "ICLR.cc/2025/Conference/Submission1373/Reviewer_ce3Y" ] ], "structured_content_str": [ "{\"summary\": \"While this paper proposes a 3D object segmentation method using a 3D Gaussian Splatting (3D-GS) framework to align 2D Segment Anything Model (SAM) masks into 3D space, the approach ultimately fails to adequately address critical issues that impact its potential contribution to the field.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is clearly structured, presenting its methodology in a logical progression. It introduces an object-level codebook and supplementary modules aimed at eliminating clustering post-processing steps, which are known to be hyperparameter-sensitive. Additionally, the paper includes experimental results on three datasets, LERF-Masked, Replica, and Messy Rooms, demonstrating some improvements in segmentation consistency and computational efficiency within the scope of these datasets over selected baselines.\", \"weaknesses\": \"1. Lack of Clarity and Detail: Key aspects of the method are inadequately explained, notably the association learning and noisy label filtering modules, which are limited to brief descriptions. This lack of detail impedes a clear understanding of the method\\u2019s workings and its practical applicability. 
Furthermore, there is no discussion on whether image constraints were utilized for reconstruction, nor are any re-rendering metrics presented to assess quality, limiting confidence in the segmentation outcomes.\\n2. Motivation and Novelty: While the paper introduces an association learning module and noisy label filtering module, these elements lack significant novelty in the broader context of 3D segmentation. Similar techniques, such as object-level codebooks, have already been leveraged by prior work, including Object-NeRF and DM-NeRF, which are not sufficiently discussed. Without a thorough comparison with these methods, it remains unclear how this approach substantially advances beyond existing techniques.\\n3. Technical Complexity and Innovation: The technical contributions are minimal. The association learning module and noisy label filtering module employ approaches already prevalent in segmentation literature. The sole modifications appear in Equations (6) and (7), with limited impact demonstrated in the experimental results (Table 3) compared to baseline methods like Panoptic Lifting and Contrastive Lift. This limited innovation detracts from the potential significance of the proposed method.\\n4. Incomplete Experimental Validation: The paper lacks key experiments necessary to assess the proposed method\\u2019s effectiveness. Notably, NeRF-based lifting methods were not evaluated on the Replica and LERF-Masked datasets, which limits the comparison of this approach with established baselines. Furthermore, the robustness of segmentation across different SAM mask granularities is unexamined, leaving uncertainty regarding the method\\u2019s adaptability to variations in segmentation granularity. Essential metrics, such as F-score calculations in 2D versus 3D point clouds and segmentation accuracy across different viewing angles, are also absent. 
These omissions hinder a comprehensive understanding of the method's applicability across diverse scenes and conditions. Additionally, Table 4 provides a breakdown of comparative experiments on various components of the design, yet it does not specify which dataset these results were obtained from, further limiting the interpretability of the findings.\", \"questions\": \"1. Could you clarify whether the F-Score reported in the paper is calculated on the 2D segmentation maps or on the 3D point clouds?\\n2. Beyond the copy-and-paste operations for scene editing, does your method support rigid transformations or deformation operations on segmented objects? Additionally, are you able to validate the re-rendered scenes with quantitative comparisons and assess the geometric accuracy of these edits?\\n3. In Figure 6(b), View 3 appears to show the original image. Could you provide additional segmentation quality results from larger viewing angles or greater distances from the initial position? This would help assess whether the method can accurately segment Gaussian spheres, especially along object edges.\\n4. It appears that only 64 images were used as training data for the Replica dataset. If this is correct, could you explain the rationale behind choosing this limited dataset size?\\n5. In Table 4, comparative experiments are provided for different components of your design. However, the dataset used for these results is not specified. Could you clarify which dataset was used for the experiments in Table 4?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"Accurate 3D scene segmentation enhances scene understanding and scene editing tasks, unlocking numerous downstream applications in AR/VR, autonomous vehicles, and robotics. 
Lifting features from 2D instance/semantic segmentation models to recent 3D representations such as NeRF and Gaussian Splatting (3DGS) is a popular technique for performing 3D scene segmentation.\", \"existing_state_of_the_art_lifting_methods_have_the_following_issues\": [\"Works like Panoptic-Lifting use linear assignment + classification loss to learn 3D segmentations, but these learned representations lack semantic meaningfulness (L42-44)\", \"Works like Gaussian Grouping and GAGA utilize association techniques as a preprocessing step for view consistency. However, the preprocessing step can produce inaccurate results (L46-47).\", \"Works like Contrastive-Lift do not require preprocessing steps and encode instance information in a feature field, which is optimized using contrastive losses. However, they require a post-processing step like HDBSCAN to predict the final instance segmentation masks. (L48-49)\", \"This work proposes a unified lifting framework for 3D instance segmentation, which does not require any preprocessing or post-processing steps. Contributions of this work are summarized as follows:\", \"They propose a unified framework for accurate 3D instance segmentation by introducing an object-level codebook representation.\", \"To train the proposed codebook effectively, they present novel association learning and noise filtering modules.\", \"The proposed method achieves SOTA on public benchmarks.\"], \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. **Paper-Writing and Presentation**: The overall quality of the paper, including the clarity of writing and the presentation of figures, was excellent. The well-structured format and coherent flow made the paper easy to read and navigate.\\n\\n2. **Clear Motivation and Shortcomings of the existing methods**: The authors provide a clear and thorough explanation of the limitations of current methods, as detailed in L40-53 and illustrated in Figure 1. 
This explanation establishes a strong context and rationale for the proposed problem statement.\\n\\n3. **Technical novelty**: The proposed object-level codebook represents an innovative technical contribution that reduces the reliance on post-processing steps. To enable effective training of this codebook, the authors introduce three key components: area-aware ID mapping, a concentration term, and a noisy label filtering module. These enhancements improve the performance of the final method as described in Ablation studies in Table 4. The significance of these contributions is substantial, and their potential impact warrants sharing them with the broader research community to advance the field. \\n\\n4. **Beats SOTA on benchmark datasets**: The authors evaluate their proposed method using three well-known datasets: Replica, LeRF, and Messy Rooms. Across all these benchmarks, the method demonstrates state-of-the-art (SOTA) performance, underscoring its effectiveness. Additionally, qualitative results highlight the method's ability to maintain multi-view consistency.\", \"weaknesses\": \"1. **Training time comparison with SOTA methods such as Gaussian Grouping**: The authors discuss training time in Table 3; however, a comparison with the Gaussian-Grouping method is not included. To further improve the quality of the manuscript, the authors should provide a detailed breakdown of the training time for their proposed method.\\n\\n2. **Inconsistency in the metrics used in the paper**: $PQ^{scene}$ is a scene-level extension of standard Panoptic Quality (PQ) that takes into account the consistency of instance IDs across views/frames (aka tracking). This metric is reported only for the Messy Rooms dataset and is not provided for the LERF-Mask or Replica datasets. I would recommend that the authors address this during the rebuttal phase. \\n\\n3. 
**Missing Results on novel-view synthesis task**: It is essential to report PSNR, SSIM, and LPIPS for the novel-view synthesis task on standard datasets. Check Table 1 in the Gaussian Grouping paper.\\n\\n4. **Missing Comparison on ScanNet scenes**: Contrastive-Lift and Gaussian-Grouping report the results on the ScanNet dataset as well. For the thoroughness of the experiments, I recommend that the authors address this during the rebuttal phase.\\n\\n5. **Robustness to the choice of instance segmentation method**: Currently, the proposed method employs the Segment Anything Model (SAM). However, how is the performance affected when a different segmentation model, such as MaskFormer, is used? Is this a limitation of the proposed method?\", \"questions\": \"1. L42-44: Can the authors clarify what they mean by \\\"lacks semantically meaningful instance features\\\"?\\n\\n2. L236: Can the authors clarify what is the cause of this accumulated error?\\n\\n3. Typo in Eq. 2. It should be L instead of L-1, as the id of the first element starts from 1.\\n\\n4. 
Comparison on Messy Rooms Dataset: Please share the results for the Gaussian-Grouping method in Table 3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"The paper tries to solve the task of 3D scene segmentation using 2D multiview instance masks within a single method, avoiding any additional pre-processing or post-processing step.\", \"The method suggested by the authors uses per-Gaussian-level features and object-level features, and later uses association functions to bring in mutual correlation between these features.\", \"The authors also propose a noisy label filtering module by estimating an uncertainty map for the segmentation masks.\", \"To show the effectiveness and scalability of the proposed method, the authors have reported quantitative numbers on popular datasets including the LeRF-Mask dataset, Replica, and Messy Rooms.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The Gaussians in the proposed method get an object-level understanding of the 3D scene by learning from the Gaussian-level features, rather than relying on post-processing techniques like clustering, which require parameter tuning.\", \"Compared to other methods, the proposed pipeline claims to not overlook small objects.\", \"The paper is easy to follow.\"], \"weaknesses\": [\"The paper does not discuss the extra memory overhead that would be incurred by introducing a separate set of features at the Gaussian level and object level.\", \"The quantitative numbers in some of the tables seem spurious to me. For example, the authors claim the mIoU value of Gaussian Grouping on Replica is 23.6, but the original paper has reported 71.15. 
The authors have not reported the mIoU numbers of Contrastive Lift, which has reported 67.0 mIoU for Replica.\", \"In Fig. 4 the authors claim that their method produces pseudo labels that are more view-consistent to facilitate the codebook learning, but the pillows on the sofas do not have consistent masks. This also contradicts the claim that the proposed method performs well on small objects in the scene.\", \"There are a number of typos and errors that need to be addressed in the manuscript. Example: inconsistent tick mark in Fig. 1 (c),\", \"Fig. 1 (\\u201cwhen\\u201d \\u2014> \\u201cwhere\\u201d).\"], \"questions\": [\"The authors claim that other methods lack semantically meaningful instance features. What would happen if we used L-Seg features or CLIP features instead of the codebooks for understanding the object features? Won't it help us query objects in the scene and improve interactivity? [Not a weakness, just a question]\", \"How much extra time overhead do clustering-based methods take? Is the proposed method saving significant time by avoiding pre-processing/post-processing methods, apart from finding hyper-parameters?\", \"The significant improvement in training time is mostly because of using 3DGS as a base representation. Do you think components from your pipeline can be used to improve existing SOTA NeRF-based methods on similar tasks?\", \"Please correct me if you think I have misunderstood any aspect of the paper, and address the Weakness section.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
With the codebook formulation, the encoded object-level features are associated with Gaussian-level point features for segmentation predictions. Experiments are conducted on the LERF-Masked, Replica, and Messy Rooms datasets for evaluation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Introduced a codebook formulation for 3D Gaussian Splatting segmentation lifting.\\n\\n2. Both qualitative and quantitative results demonstrate the effectiveness of this work.\", \"weaknesses\": \"1. Although the authors use the word \\\"novel\\\" multiple times in the abstract, the proposed work seems incremental when compared to previous clustering-based methods, such as Gaussian-grouping [1].\\n\\n2. As the main goal of this work is to lift 2D masks to masks for each 3D Gaussian, results on object removal in 3D scenes should be presented. Only projecting 3D segmentation masks back into 2D images is not convincing.\\n\\n3. More qualitative comparison with Gaussian-grouping [1] should be conducted, especially for downstream tasks, such as 3D removal, inpainting, and editing.\\n\\n4. Despite tremendous work on 3D Gaussian Splatting Segmentation, such as Gaga [2], Click-Gaussian [3], and FlashSplat [4], discussion on these recent works is still necessary to justify the setting of this work. In particular, FlashSplat highlighted that each 3D Gaussian may have multiple semantic labels, as they are shared between objects in rendering. 
In such a setting, the per-Gaussian features can be ambiguous, and enforcing each Gaussian to have only one semantic label in this work would be inherently unreasonable.\\n\\n\\n\\n[1] \\\"Gaussian grouping: Segment and edit anything in 3d scenes.\\\" ECCV 2024 \\n\\n[2] \\\"Gaga : Group Any Gaussians via 3D-aware Memory Bank\\\" Arxiv 2024\\n\\n[3] \\\"Click-Gaussian: Interactive Segmentation to Any 3D Gaussians\\\" ECCV 2024\\n\\n[4] \\\"FlashSplat: 2D to 3D Gaussian Splatting Segmentation Solved Optimally\\\" ECCV 2024\", \"questions\": \"please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
CGON8Btleu
BrainACTIV: Identifying visuo-semantic properties driving cortical selectivity using diffusion-based image manipulation
[ "Diego Garcia Cerdas", "Christina Sartzetaki", "Magnus Petersen", "Gemma Roig", "Pascal Mettes", "Iris Groen" ]
The human brain efficiently represents visual inputs through specialized neural populations that selectively respond to specific categories. Advancements in generative modeling have enabled data-driven discovery of neural selectivity using brain-optimized image synthesis. However, current methods independently generate one sample at a time, without enforcing structural constraints on the generations; thus, these individual images have no explicit point of comparison, making it hard to discern which image features drive neural response selectivity. To address this issue, we introduce Brain Activation Control Through Image Variation (BrainACTIV), a method for manipulating a reference image to enhance or decrease activity in a target cortical region using pretrained diffusion models. Starting from a reference image allows for fine-grained and reliable offline identification of optimal visuo-semantic properties, as well as producing controlled stimuli for novel neuroimaging studies. We show that our manipulations effectively modulate predicted fMRI responses and agree with hypothesized preferred categories in established regions of interest, while remaining structurally close to the reference image. Moreover, we demonstrate how our method accentuates differences between brain regions that are selective to the same category, and how it could be used to explore neural representation of brain regions with unknown selectivities. Hence, BrainACTIV holds the potential to formulate robust hypotheses about brain representation and to facilitate the production of naturalistic stimuli for neuroscientific experiments.
[ "brain", "selectivity", "visual cortex", "fMRI", "manipulation", "variation", "diffusion", "neuroscience" ]
Accept (Poster)
https://openreview.net/pdf?id=CGON8Btleu
https://openreview.net/forum?id=CGON8Btleu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sOFgFkytgs", "s2f0jlD45h", "o93Ajv6Rsv", "nxf4xFt7sW", "lSeET4VWBq", "kZxKDmKMZZ", "hCFWcq0tTv", "e3WtigqDe9", "btk1Jnks8O", "WycFfN1typ", "W0cPgYK67m", "UfTY1m9VTc", "T2lIRqYA2M", "RFhOoJ6eYk", "Q3Dv7HfbEM", "P1YrI58uBr", "HNJ332pKsQ", "EtMtIqBAZS", "6oeJ66KCrS", "6S4hj36G96", "5QV5D9CFVB", "4TBbsFIJmL", "1A5z7F6uGV" ], "note_type": [ "official_review", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730918445306, 1732568214933, 1729843571387, 1737524265319, 1730576125438, 1732591344081, 1732446433697, 1732445164525, 1732566570302, 1730517515595, 1730372959554, 1732446329396, 1732559559333, 1732780823978, 1732446717375, 1732444556885, 1732445771818, 1732444608576, 1732445551871, 1734472152322, 1732531201513, 1732771790131, 1732445234855 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13524/Reviewer_mur3" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Reviewer_5W1K" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13524/Reviewer_82Gz" ], [ "ICLR.cc/2025/Conference/Submission13524/Reviewer_5W1K" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Reviewer_Dc87" ], [ "ICLR.cc/2025/Conference/Submission13524/Reviewer_eXwK" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Reviewer_Dc87" ], [ "ICLR.cc/2025/Conference/Submission13524/Reviewer_5W1K" ], [ 
"ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Area_Chair_4Wp6" ], [ "ICLR.cc/2025/Conference/Submission13524/Reviewer_eXwK" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ], [ "ICLR.cc/2025/Conference/Submission13524/Authors" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors present a method for generating maximizing or minimizing images for a brain ROI using Stable Diffusion. The method uses the NSD dataset, and builds on previous work using this dataset and diffusion models to generate activating images. Specifically, starting from a source image, the method changes this image to maximize or minimize the activation in an ROI (though the estimation of a CLIP vector characterizing the ROI from an encoding model). This makes it possible to detect the change in category representations as different regions are maximized. The method can also be used to contrast two ROIs, even ones that process the same category.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The method nicely builds up on previous work (which it acknowledges well).\", \"It allows a more concrete evaluation of images across ROIs due to the presence of the common starting image.\", \"The formulation allows for both maximization and minimization.\", \"It is possible to contrast multiple ROIs along the same brain pathway.\", \"The results makes sense in terms of the discovered selectivities.\"], \"weaknesses\": \"1- More quantification of the observed effects is needed, or at least more details about them.\\n2- More should be discussed in terms of the meaning of the maximizing/minimizing categories and the results. 
It is not clear which results about the brain are novel and which are replications.\\n3- The most novel results appear to be in figure 7. However, these are not discussed in terms of significance or hypotheses.\\n4- Luo et al. (2024) is used in the methods but does not figure in the related work section. How does the current work compare with the method used there?\", \"questions\": [\"Regarding Weakness 1: How many images are used to compute the selectivity results (e.g., figure 4), and why are those chosen? What is the predicted activity associated with the maximizing and minimizing images? What are the statistics related to these activities? Could it be used to make predictions about how such images would activate or deactivate brain regions?\", \"Regarding Weakness 2: how do we understand the minimizing categories? Is it just a factor of having to step away from the preferred categories, or is there truly a reduction in activity from baseline? Is this supported in the literature?\", \"Regarding Weakness 3: What hypotheses can be predicted from the results comparing two regions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarification on a mistake for gamma=0\", \"comment\": \"Our apologies. In the above reply, we made a mistake regarding $\\\\gamma=0$: our code implementation simply passes the original image as a result (without using the diffusion model), hence the metrics are (extremely) close to zero.\\nHowever, the rest of the arguments still hold if we instead look at $\\\\gamma \\\\approx 0$ (in this case, $\\\\gamma=0.1$).\"}", "{\"summary\": \"The authors introduce BrainACTIV, a method for generating images modulated by the responses of a specific brain region, enabling interpolation between maximum and minimum activation levels. Additionally, they present two hyperparameters that regulate the semantic and structural variations of the generated images. 
The authors claim that this approach has the potential to provide insights into neuroscientific experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors first achieved controllable image generation by manipulating a reference image to enhance or suppress specific target brain regions.\\n2. Based on the example images, particularly the interpolated manipulated reference images in Figure 1, the proposed method appears effective, especially concerning the functional regions of interest defined by neuroscience.\\n3. The article is highly readable, thanks to its clear writing and presentation.\", \"weaknesses\": \"1. The motivation needs to be better articulated. While the article emphasizes the controllable modification of the reference image, it lacks a clear rationale for this approach. What advantages does this controllable generation offer for guiding neuroscience discoveries compared to existing methods?\\n2. The technical novelty is limited, as the article primarily implements IP-Adapter and SDEdit to achieve controllable modifications of reference images.\", \"questions\": \"1. A comparison with related work [1-2] would be valuable if feasible.\\n2. The response to brain activation seems to be influenced by the fMRI encoder (specifically the CLIP encoder used in this paper), so its performance should be addressed in the experimental section.\\n3. How do the four mid-level features\\u2014entropy, metric depth, Gaussian curvature, and surface normals\\u2014contribute to advancements in neuroscience discoveries?\\n4. How do the two hyperparameters regulate the trade-off between semantics and structure? Increasing either hyperparameter seems to lead to deviations from the reference image, but it\\u2019s unclear how this establishes a trade-off between the two aspects. What is the rationale behind this approach?\\n\\n[1] Gu Z, Jamison K W, Khosla M, et al. 
Neurogen: activation optimized image synthesis for discovery neuroscience[J]. NeuroImage, 2022, 247: 118812.\\n\\n[2] Luo A, Henderson M, Wehbe L, et al. Brain diffusion for visual exploration: Cortical discovery using large scale generative models[J]. Advances in Neural Information Processing Systems, 2024, 36.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"At its core, the methodology employs a CLIP-based brain encoder that transforms visual inputs into corresponding brain activation measured with fMRI, with the goal of generating interpretable images that excite or inhibit certain brain areas. The proposed method, BrainACTIV, allows for incorporating semantic information from modulation embeddings as well as lower-level image information directly into the generation process by seeding a diffusion process with specific target images. The authors validate their proposed model by training separate encoders to predict the brain's activity in response to the synthesized images.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper overall is very well written and clear, with a straightforward exposition and extensive experiments. Sufficient detail is provided to understand all key components of the data, model architecture, and training.\", \"weaknesses\": [\"While the paper is very well presented, I'm taking issue with the related work section, the selection of experiments, and the overall novelty. Considered as a whole, the paper can be improved significantly by addressing these concerns.\", \"**Major Concerns**\", \"*Related works*: there is a wealth of articles surrounding generative models for optimizing stimuli for neuronal data, especially using fMRI; see related literature (1, 2, 3, 4). 
The focus of the related work section seems to lie on category selectivity in visual cortex and diffusion models using CLIP. More emphasis on the wealth of related work could be placed to compare the proposed method to other approaches\", \"*Comparing predictive performance*: Having separate encoding models is a clear strength of this paper. However, the predictive performance of the CLIP encoder model was not convincingly shown. Showing that the CLIP-based encoding model also has performance on the held-out test set (as shown in Fig. 9) comparable to that of the DINO and fwRF models would provide valuable insight. A similarly interesting analysis would be including the CLIP-encoding model in Fig. 3 to understand the change in neuronal activity from the generating model. Lastly, it is important to demonstrate how much of the variation in the images is due to the random seed in the generator models - by training different models and measuring the similarity of the optimal images, and by measuring the variability of activations in the evaluator models (DINO and fwRF)\", \"*Analyses of mid-level image features*: This analysis shows much promise but doesn't yield much insight. To establish the validity of these measures, it would have been important to show that the metrics are consistent for either handpicked or parametric stimuli. Similarly, these metrics could be very useful to study early- to mid-level ROIs.\", \"*Overall novelty*: Taken together, while this approach is promising, it doesn't deliver a new insight into either category selectivity or mid-level image feature processing of visual cortex. The analyses are mostly confirmatory with respect to the current state of understanding of visual processing. 
The authors could take one of the so-called \\\"no-mans-land\\\" ROIs and discover novel categories that drive these regions, such as anterior IT.\"], \"literature\": [\"(1) https://medarc-ai.github.io/mindeye\", \"(2) https://mind-vis.github.io\", \"(3) https://arxiv.org/abs/2305.10135\", \"(4) https://proceedings.neurips.cc/paper_files/paper/2023/hash/67226725b09ca9363637f63f85ed4bba-Abstract-Conference.html\"], \"questions\": [\"The usefulness of the top-nouns analysis is unclear to me, and the authors don't discuss the outcome of these results in the discussion section. In general, applying BrainACTIV to areas in the visual hierarchy that are less well understood seems very promising, going beyond largely confirmatory analyses. It would be great if the authors could include further discussion points on how the top-noun analysis could best be utilized in the discovery of new feature or category selectivities.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for the detailed clarifications, which addressed most of my questions. However, I remain confused about the motivation behind the reference image.\\n\\nFrom my understanding, the reference image serves two purposes: 1. controllable image generation and 2. tuning mid-level features. As the authors mentioned, mid-level features help explain the reference image as a baseline. However, I am still uncertain about the necessity of controllable generation.\\n\\nI completely agree with the authors regarding the random variation in the diffusion model. However, since the results are fixed under the same random seed, horizontal comparisons of the results are possible. In this context, the reference image is determined by the random seed. 
While this paper extends the customization of the reference image, I still do not fully understand the advantages of doing so.\"}", "{\"title\": \"Response to Reviewer 5W1K (2/2)\", \"comment\": \"> ### **Q1: Comparison to BrainDiVE and NeuroGen**\\n\\nWhile BrainACTIV presents methodological improvements upon BrainDiVE [b] and NeuroGen [c], the lack of a reference image in these works precludes a direct comparison. Moreover, a comparison in terms of image generation quality is not intended, since the three methods can be straightforwardly adapted to any state-of-the-art generative model (diffusion models for BrainACTIV and BrainDiVE, GANs for NeuroGen). Because BrainACTIV retains the strengths presented by these approaches (e.g., fine-grained distinction of similar ROIs, as presented in Section 4.4), we argue that the superiority of BrainACTIV lies in the possibilities opened up by our image manipulation approach, as outlined above. However, we are open to suggestions regarding analyses that could provide a fair comparison between these methods.\\n\\n> ### **Q2: Performance of CLIP encoder**\\n\\nWe strongly agree that the performance of the CLIP encoder provides important context for our results. **Therefore, we have adapted Figure 9 to display the performance of the three brain encoders on the held-out test set for all subjects and ROIs. We have also added the corresponding CLIP encoder predictions to examples in Figure 3(A), as well as predicted modulation results from the CLIP encoder in Figure 3(B)**.\\n\\n> ### **Q3: Mid-level features**\\n\\nThese four mid-level features are not intended to provide a comprehensive analysis of the role of mid-level feature representation in the human visual cortex, but rather to serve as an illustration of how BrainACTIV, because of its use of a reference image as baseline, can give precise quantitative metrics of the effects of brain optimization on images and on predicted brain activations. 
We have **updated Methods section 3.2 to better explain our intention here**, while we have also **supplemented Results section 4.3 with additional references** to relate our observed effects of mid-level features on predicted brain activations to existing neuroscience findings. \\n\\n> ### **Q4: Trade-off between semantics and structure**\\n\\nThank you for raising these questions. **We have revised the original explanation of the two hyperparameters in our text (now Section 4.5) to explain their role more clearly, and we have rephrased the term \\u201ctrade-off\\u201d to avoid confusion**. Moreover, **this section now emphasizes the use of BrainACTIV to produce novel experimental stimuli**.\", \"to_clarify_these_points\": \"both hyperparameters indeed introduce deviations from the reference image, decreasing semantic (CLIP) and structural similarity (LPIPS) to the latter. However, when we look at the two different endpoints (one hyperparameter higher than the other), we are presented with two choices: (1) retain structural similarity while varying semantics, or (2) have more freedom in the low-level structure while representing very similar semantic content. Both of them present opportunities when testing neural representations in controlled experiments (e.g., to study the relative contribution of visual versus semantic features, as mentioned in our Introduction). Hence, the rationale behind this section is to briefly present both options to researchers and illustrate their effect on the image.\\n\\n---\\n\\n- [b] Luo, A. F., Henderson, M. M., Wehbe, L., and Tarr, M. J. (2023). Brain diffusion for visual exploration: Cortical discovery using large scale generative models. NeurIPS.\\n- [c] Gu, Z. et al. (2022). NeuroGen: Activation optimized image synthesis for discovery neuroscience. NeuroImage, 247:118812.\"}", "{\"title\": \"Response to Reviewer 82Gz (1/2)\", \"comment\": \"We appreciate the concrete suggestions made by **Reviewer 82Gz**. 
We have carefully considered them and adapted our paper accordingly. We address each suggestion below and gladly welcome further questions about them.\\n\\n> ### **W1: Related works on stimulus reconstruction**\\n\\nThank you for this important suggestion. Due to the wealth of work on both image reconstruction / brain decoding and data-driven exploration of category selectivity (particularly using the Natural Scenes Dataset), we initially decided to narrow the focus of our Related Work section to studies that specifically focus on optimal visual stimulus generation.\\n\\nFor clarification, we would like to emphasize the two different directions regarding brain-conditioned image generation: the first one tackles (exact) reconstruction of observed stimuli based on the brain activity patterns they elicited ([a][b][c]) while the second one concerns the generation of novel stimuli that activates a specific brain region. These two directions share many components (particularly, those using diffusion models and fMRI data), and the methods can often be adapted to perform both of them (see [d][e]). We therefore acknowledge the vast amount of work performed on image reconstruction from brain activity, and its importance to our work. **Accordingly, we have adapted the Related Work section by adding the suggested citations**. Because our work pertains solely to the second mentioned direction, we preserve the focus of this section on works that deal with optimal visual stimulus generation aiming to provide better context on the evolution of this subfield of research.\\n\\n> ### **W2: Performance of CLIP encoder**\\n\\nWe strongly agree that the performance of the CLIP encoder provides important context for our results. **Therefore, we have adapted Figure 9 to display the performance of the three brain encoders on the held-out test set for all subjects and ROIs. 
We have also added the corresponding CLIP encoder predictions to examples in Figure 3(A), as well as predicted modulation results from the CLIP encoder in Figure 3(B)**. \\n\\nDue to time constraints, it was not possible to provide a comprehensive analysis of the effect of random initialization of the encoders and generative model on our results. However, to provide further evidence that BrainACTIV minimizes the effect of random variability from the generator model, and better isolates the effect of brain optimality, **we have displayed example variations for FFA and PPA over multiple different seeds (for the diffusion model) in Appendix A.10**.\\n\\n---\\n\\n- [a] Scotti, P. S. et al. (2023). Reconstructing the Mind\\u2019s Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors. NeurIPS.\\n- [b] Chen, Z. et al. (2023). Seeing beyond the brain: Masked modeling conditioned diffusion model for human vision decoding. CVPR.\\n- [c] Zeng, B. et al. (2023). Controllable mind visual diffusion model. arXiv preprint arXiv:2305.10135.\\n- [d] Ozcelik, F. and VanRullen, R. (2023). Natural scene reconstruction from fMRI signals using generative latent diffusion. Nature Scientific Reports, 13.\\n- [e] Papale, P., De Luca, D., and Roelfsema, P. R. (2024). Deep generative networks reveal the tuning of neurons in IT and predict their influence on visual perception. bioRxiv, pages 2024\\u201310.\"}
**We have added Appendix A.12 to tackle these points and illustrate the explanation below**:\\n\\nSince BrainACTIV injects brain-derived conditions (namely, the intermediate embeddings from equation 5) into the generation process using IP-Adapter, we assume that generation \\\"without brain-conditioned targeting\\\" means that we use IP-Adapter with $\\\\alpha=0$ (equivalent to simply passing the reference image through IP-Adapter, as would be commonly done for generating image variations).\\n\\nHence, for computing additional baselines, we take MSE (and LPIPS) between the reference image and the synthesized images using different values of SDEdit's $\\\\gamma$ (i.e., $\\\\alpha=0$ and $\\\\gamma \\\\in [0,0.1,...,0.9,1]$). Intuitively, the difference between these synthesized images is that $\\\\gamma=1$ shows how CLIP \\\"interprets\\\" the semantic content in the reference, while $\\\\gamma=0$ has no effect from CLIP and $\\\\gamma$'s in-between are an interpolation between these two endpoints. Hence, all of these represent different interpretations of what would be considered \\\"unconditional image variation\\\" in the context of BrainACTIV. For example, Figure 32 values at $\\\\gamma=1$ could be considered a lower baseline, while values at $\\\\gamma=0.6$ could represent an upper baseline.\", \"for_clarification\": \"in our first response, we understood you referred to the specific case of $\\\\alpha=0, \\\\gamma=0$, which leads to metrics very close to zero. However, using the wider range of $\\\\gamma$ indeed provides important context into the effect of brain conditioning on structural control w.r.t. unconditional image variation.\"}", "{\"summary\": \"The paper introduces a new model BrainACTIV to generate synthetic images optimizing specific brain responses from specific areas. 
Their model incorporates recent work on conditioning diffusion models to generate images that vary structurally with the brain signal, for better interpretation of brain representations and downstream analysis.

Soundness: 3
Presentation: 3
Contribution: 3

**Strengths**: The paper is novel in tackling a common interpretability problem faced by works that synthesize images based on brain signals. These synthetic images usually require a lot of guesswork to confirm that they represent information from the brain. The IP-Adapter is a clever way to induce structural biases into diffusion models so that the generated images vary reliably along the axis that encodes the most salient representation a brain area cares about. This model has great potential to be extended to different brain data modalities, and it seems to require only a CLIP-based brain encoding model for adaptation.

The second part of the paper, analyzing mid-level features, provides good examples of potentially making inferences from these synthetically generated images, further exploiting the interpretability of the method.

**Weaknesses**: The paper should perhaps incorporate the predictive performance of the DINO-ViT and fwRF encoders for these brain areas in Figure 3b, so readers get a better sense of how big a difference the manipulation of images can make.

My main critique of this paper is perhaps more high-level. Now that we have a model that can generate very clean optimized images for specific brain areas, can one use it to make new discoveries/inferences about how the brain represents visual information? Most of the work that synthesizes images from brain responses has shown that the model can generate what we already know about the brain (e.g., face images with FFA data), but it rarely informs us of what we don't already know.
I am aware this paper might want to establish itself as a proof that this method can work for generating testable hypotheses, but I am not yet convinced that it is substantially different from all the other papers in this subfield.

**Questions**: Line 323: what's the reasoning behind choosing images with responses closest to the baseline activation?

Table 1: it might help to show with images what you mean by "preserve structural fidelity", since "preserved" can be a relative concept. It might even be helpful to compute the numbers for images synthesized in other papers for comparison.

Flag for ethics review: No ethics review needed.
Rating: 8
Confidence: 4
Code of conduct: Yes

---

**Summary**: The paper proposes a method for manipulating a reference image so as to selectively enhance (or decrease) the corresponding predicted brain activity in a given target region. The image manipulation is performed with pretrained diffusion models, and the brain activity prediction relies on training a brain decoder to perform regression between a large image dataset (NSD) and the corresponding brain activation patterns.

Soundness: 4
Presentation: 4
Contribution: 4

**Strengths**:
- The proposed image manipulation technique is non-trivial, and goes well beyond deploying an off-the-shelf diffusion system.
It leverages state-of-the-art diffusion systems and adapters.
- The manipulated images are convincing, and compatible with known selectivity in the target brain regions.
- The technique allows drawing further subtle distinctions between brain regions that have traditionally been associated with similar category selectivity, and could be used to inform future neuroscience experiments.
- The method allows controlling the trade-off between semantic variation and structural similarity with two hyperparameters.

**Weaknesses**: The paper shares weaknesses with most brain decoding studies, in the sense that the results describe predictions of a model trained on a specific dataset and a specific (combination of) deep learning model(s). These predictions will ultimately require validation from actual experiments. However, this is well acknowledged in a paragraph on "limitations".

**Questions**:
- "we opt to average the embeddings over all subjects, excluding the subject on which predictions are made. Hence, we modulate brain activity in each subject through a signal derived exclusively from the rest of the subjects' data": This statement is vague enough to be interpreted in different ways (as there are multiple components in the pipeline that could be calculated over one or multiple subjects). I would suggest clarifying things with the corresponding variable names from equations (1-5).
- On line 224, the methods description switches from the computation of variables related to the target region of interest (e.g., $z_{max}$) to the introduction of the specific reference image and its manipulations $z_I$. It would help to state this explicitly.
- Table 1 provides structural metrics for manipulated images together with a lower baseline computed from random images.
I would suggest adding an upper baseline calculated from diffusion-based image variations without brain signal conditioning.
- In Figure 6, it could be helpful to include the reference image (e.g., in the top-left corner).
- The literature review on "optimal visual stimulus generation" misses (at least) one reference, from the BrainDiffuser paper (Ozcelik et al., 2023).

Flag for ethics review: No ethics review needed.
Rating: 8
Confidence: 4
Code of conduct: Yes

---

**Response to Reviewer 5W1K (1/2)**

We thank **Reviewer 5W1K** for the constructive review of our work. We address BrainACTIV's motivation and concerns regarding technical novelty below. Further, we have carefully considered each of the comments and revised our paper accordingly.

> ### **W1: Advantages of image manipulation**

Our work indeed follows an existing line of research attempting to formulate new hypotheses about stimulus representation in the human visual cortex through data-driven analyses (facilitated by the recent availability of large naturalistic neuroimaging datasets). Existing methods generate images that maximize (predicted) activations of a target brain region, enabling the qualitative interpretation of visual/semantic properties that may be preferred by this region. However, none of them have explicitly enforced structural constraints on the generations; thus, these individual images have no explicit point of comparison.
Consequently, their conclusions rely on the joint interpretation of a large number of images through, e.g., human behavioral studies to discern image features that are relevant from those that were randomly generated by the model (hence irrelevant for driving activations).

The introduction and modification of a reference image provides a direct comparison point for each synthesized stimulus, bringing about several benefits:

- Straightforward interpretation of image features that are relevant to the brain region (because random variability from the diffusion model is minimized).
- Possibility to quantify these relevant features through automated methods from computer vision (because the values for reference and variation images are directly comparable). Hence, there is no longer a need for human subjects to interpret hundreds of images, and the computational costs are vastly reduced.
- The produced image variations represent a hypothesized tuning axis for the target region or group of voxels, with the reference image serving as a control stimulus. Hence, BrainACTIV can be employed by researchers in novel neuroscientific studies to study fine-grained selectivity properties.

To make this motivation clearer in the text, **we have rephrased several sentences in the Abstract and Introduction, and we have reordered and expanded previous Results sections 4.3 to 4.5 to better convey these advantages**.

> ### **W2: Technical novelty**

We acknowledge that BrainACTIV primarily implements existing methods for controllable image generation.
However, we believe in the importance of bringing tools and ideas from machine learning closer to adjacent scientific fields (particularly, computational neuroscience), which is why we marked ‘applications to neuroscience/cognitive science’ as our primary area. The fast-paced development of new models and methods in machine learning leaves many potential applications unexplored, missing out on significant impact on these fields. We argue that BrainACTIV represents an innovative application of controllable diffusion-based image generation to advance our understanding of the human visual cortex.

BrainACTIV is the first work to explore image variation with respect to a reference image using generative models to (1) formulate novel hypotheses about stimulus representation in the human visual cortex and (2) produce controlled experimental stimuli for novel neuroscientific experiments.

In addition, the incorporation of IP-Adapter and SDEdit significantly reduces the computational costs needed to generate conclusions, relative to BrainDiVE. Our image generation process takes ~80 hours for all subjects on an NVIDIA A100, while Luo et al. (2023) [b] report 1,500 hours on an NVIDIA V100 (reports indicate the A100 is only ~2 to 4x faster than the V100 [a], while our results take ~9x less time, accounting for the difference in Stable Diffusion versions). **We have briefly mentioned this advantage in line 350**. Moreover, we remove the need for human behavioral studies to analyze the generated images, further saving time and costs.

---

- [a] https://lambdalabs.com/blog/nvidia-a100-vs-v100-benchmarks
- [b] Luo, A. F., Henderson, M. M., Wehbe, L., and Tarr, M. J. (2023). Brain diffusion for visual exploration: Cortical discovery using large scale generative models.
NeurIPS.

---

**Thank you for detailed response**

Thank you for the detailed responses.

I went through Appendix A.7, and that somewhat validates what I was arguing. Obvious semantic categories can be easily pulled out with this method (component 1 for food, for example). These areas very likely show up as well in published localizer experiments (e.g., food ROIs in Jain et al. 2023), and this method does well in validating them. For the areas identified by components 2 and 4, it is still hard to interpret what they are coding for, and whether there should be a semantic category label attached to them at all. However, I do still think this is progress compared to what people were capable of doing before with ROI localizers and encoding models.

---

Thanks for the clarification. The face perception case study illustrates the motivation of BrainACTIV well, which is to manipulate "real images" with brain patterns to study "specific" neural representations. The authors propose spherical interpolation in CLIP space to provide conditional embeddings for the IP-Adapter to achieve the above goal.

I would increase the rating score due to its solid approach and insights into neuroscience research.

---

**Summary of our response**

## We thank all reviewers for their helpful and constructive comments.

We were pleased to see that all reviewers considered the soundness and presentation of our paper to be good to excellent, highlighting its clarity, innovativeness and potential for new neuroscientific discovery. In our revision, we implement substantial changes that we hope will better convey the novelty of BrainACTIV and its value as a scientific contribution.
Since some comments were shared across multiple reviewers, we highlight the most significant revisions here:

- To stress the value of our novel method of image variation for neuroscientific hypothesis testing, we have reordered and expanded previous Results sections 4.3 to 4.5, resulting in a more logical flow from reproducing established findings to novel insights gained by BrainACTIV.
- We have expanded prior literature to better acknowledge the decoding/reconstruction literature that BrainACTIV builds upon.
- We now report CLIP encoder performance in the main text, Figure 3, and Appendix A.2.
- To showcase how BrainACTIV can generate novel insights beyond already-known category-selective regions, we have added additional analyses of early visual and anterior IT ROIs in Appendix A.6 and Appendix A.7, respectively.
- Also, while not explicitly requested by any reviewer, we replaced the original projection set with a much larger one (line 302) to reduce bias in the results, so we updated all figures accordingly.

We provide more specific responses to all reviewer comments below. We are hopeful that with these revisions, all reviewers will achieve consensus on the valuable contribution of BrainACTIV.

---

**Response to Reviewer mur3 (1/2)**

We thank **Reviewer mur3** for the positive comments and insightful questions. We have revised our paper based on these questions to provide more context into the relevance of our work. We address your questions and suggestions below.

> ### **Quantification / details of the observed effects**

### *How many images are used to compute the selectivity results (e.g., Figure 4), and why are those chosen?*

All analyses in Section 4.3 (specifically, Figure 4 and Figure 5) use the results from our main experiment (Section 4.2). Namely, 20 variations for each of 120 test images in each subject.
To aggregate these results into a single figure, we average over all subjects and all images. We make this particular choice because the modulation embeddings z_max and z_min, used as endpoints to generate the subject- and ROI-specific image variations, are highly similar across subjects (please see the newly added Appendix subsection A.8). Hence, the semantic and mid-level features modified in the images vary minimally across subjects. **We have updated the first paragraph of Section 4.3 to clarify this information in the main text.**

### *What is the predicted activity associated with the maximizing and minimizing images?*

Figure 3 (A) displays example results for our maximization and minimization experiment from Section 4.2. For each of them, we show the activity of the target region of interest as predicted by each of our three brain encoders (**based on the other reviews, we have updated Figure 3 to also display predictions from our CLIP encoder**). We further show the average of these values over all images and subjects (with reasoning similar to that mentioned above) in Figure 3 (B) to evidence BrainACTIV's potential to modulate activity in these regions.

### *Could it be used to make predictions about how such images would activate or deactivate brain regions?*

While the effect of our synthesized images must be verified in future work by presenting them to new subjects in an MRI scanner, the observation of a clear pattern across ROIs in Figure 3 (B), as well as the use of two different encoders (namely, DINO-ViT and fwRF) that are not based on CLIP, provides robust support for BrainACTIV's applicability in such online experimental settings. **We have also updated Figure 9 to display the predictive performance of these three encoders**, in order to provide further context to the predictions.

> ### **Discussion about minimizing categories**

### *How do we understand the minimizing categories?
Is it just a factor of having to step away from the preferred categories, or is there truly a reduction in activity from baseline?*

Here, minimization should be understood as the ROI's level of activity relative to the reference image. Compared to the no-stimulation baseline, category-selective ROIs typically respond positively to all images (including non-preferred categories; see Figs. 3 and 4 in [a] for examples), but much more strongly to the preferred category. In our analysis, we purposely chose reference images from NSD that yielded average activations for each ROI, lying somewhere between the minimum and maximum for that ROI. BrainACTIV then exploits this range in activity to push the reference towards the tails of the distribution of (positive) activations for that ROI. Hence, our minimization should not be interpreted as 'deactivating' or suppressing fMRI activations, but rather as leveraging the full range of activation, rather than only considering the top-activating images as is typically done in MEI studies. **We have updated our phrasing in the Abstract and main text to clarify this**.

### *Is this supported in the literature?*

In terms of the observed results with minimization, these are broadly consistent with the known literature (e.g., face-selective regions respond the least to places and vice-versa), but some of the minimization effects we observe are, to our knowledge, not yet documented in the literature and could yield new hypotheses about these regions that can be tested in future studies. **We have updated the first paragraph of Section 4.3 to explicitly mention this in the main text**.

---

- [a] Groen, I. I. A., Silson, E. H., Pitcher, D., and Baker, C. I. (2021). Theta-burst TMS of lateral occipital cortex reduces BOLD responses across category-selective areas in ventral temporal cortex.
NeuroImage, 230, 117790. https://doi.org/10.1016/j.neuroimage.2021.117790

---

**Response to Reviewer eXwK**

We appreciate the positive feedback and helpful suggestions from **Reviewer eXwK**. We address each of them below.

> ### **Q1 and Q2: Clarity in text**

Thank you for this valuable feedback. **We have updated lines 228 and 345 to make these variables more explicit in the text**.

> ### **Q3: Additional baseline in Table 1**

We agree that the baseline of random images constitutes a lower baseline and that it would be useful to have an upper baseline as well, to quantify the penalty on structural control associated with brain conditioning. However, we are not entirely sure what insight we would get from comparing to variations without brain conditioning. In our view, this would emphasize that the structural control as implemented by SDEdit is working correctly, but the distance values would be really close to zero, since the variation and the target would be almost identical. We welcome further elaboration on the added value of this suggested baseline or other potentially useful baselines.

> ### **Q4: Improvement to Figure 6 (now Figure 7)**

We agree on adding the original image to the old Figure 6 (now Figure 7). We have omitted the row and column corresponding to $\alpha=0$ and $\gamma=0$ from the examples, since these are highly redundant. **We have added the reference image in Figure 7(A) within a simple graphic explaining the usefulness of BrainACTIV for producing novel neuroscientific stimuli**. This accompanies changes made to Section 4.5 to emphasize this contribution, based on the reviews.

> ### **Q5: Literature review missing references**

Thank you for pointing this out. We acknowledge the vast amount of work performed on image reconstruction from brain activity, and its connection to our work as an alternative way to perform brain-guided image synthesis.
**Accordingly, we have added relevant citations in our Related Work section**. Particularly, the work by Ozcelik et al. (2023) [a] is a good example of image reconstruction methods that can be adapted to generate optimal inputs for specific ROIs. **We have added citations to this work in lines 60 and 140**.

---

- [a] Ozcelik, F. and VanRullen, R. (2023). Natural scene reconstruction from fMRI signals using generative latent diffusion. Nature Scientific Reports, 13.

---

**Response to Reviewer mur3 (2/2)**

> ### **Hypotheses predicted from the comparison between regions**

Figure 7 (now Figure 6) indeed demonstrates how BrainACTIV can be used to reveal novel insights about fine-grained differences between ROIs with the same category selectivity. In Section 4.4, we describe the apparent differences between images that are accentuated for each pair of ROIs with the same category selectivity. By having these accentuations embedded in the image generation process, BrainACTIV directly generates new hypotheses about the differences between the ROIs, which can then be tested in future experiments by using the generated images as stimuli. **We have changed the order of presentation of the results** so that Figure 7 now follows directly after Figure 5, highlighting the novelty of the hypotheses generated through the accentuation method, relative to the top-nouns analyses or ROI comparisons done in, e.g., BrainDiVE [b]. **In addition, we have edited our phrasing in Section 4.4 to explicitly note that we consider the observed differences as new hypotheses**.

> ### **Comparison to BrainSCUBA (Luo et al., 2024)**

Thank you for this important remark.

Similar to our work, BrainSCUBA [c] attempts to solve a problem with current optimal (visual) stimulus generation methods (particularly, BrainDiVE).
Namely, the reliance on visual inspection of synthesized images to formulate new hypotheses about category selectivity, and their consequently low interpretability. Their method further employs a CLIP-based brain encoder similar to the one in our work. However, the main focus of their approach is on the generation of interpretable (natural language) captions that elucidate semantic selectivity properties at the voxel level. While they additionally use these captions to prompt text-to-image diffusion models (yielding cleaner images than BrainDiVE), we consider that the methodological problems we point out in Section 2 (independently generating one stimulus at a time) remain in BrainSCUBA. Moreover, we believe images still yield richer semantic and low-level information than natural language captions, and work in computer vision can be leveraged to quantify highly specific properties (as we attempt in Section 3.2). Finally, BrainACTIV additionally tackles the production of controllable stimuli for novel scientific experiments (**see revised Section 4.5**). Still, the work on BrainSCUBA has been really valuable for BrainACTIV's implementation, and its alternative way of tackling current limitations in the field through analyses of natural language is worth investigating further.

Because there is an increasingly large number of studies on data-driven investigation of cortical selectivity (particularly using the Natural Scenes Dataset), we decided to narrow the focus of our Related Work section to those that specifically focus on the generation of novel visual stimuli that maximize activity in target brain regions. **However, we have added a sentence to our Discussion highlighting how the adaptation of BrainACTIV's manipulation approach to natural language could yield valuable insights on semantic selectivity**.

---

- [b] Luo, A. F., Henderson, M. M., Wehbe, L., and Tarr, M. J. (2023).
Brain diffusion for visual exploration: Cortical discovery using large scale generative models. NeurIPS.
- [c] Luo, A. F., Henderson, M. M., Tarr, M. J., and Wehbe, L. (2024). BrainSCUBA: Fine-grained natural language captions of visual cortex selectivity. ICLR.

---

**Response to Reviewer Dc87**

We thank **Reviewer Dc87** for the positive evaluation of BrainACTIV and the insightful questions regarding novel discoveries. We address these below, and incorporate the concrete suggestions into our paper.

> ### **W1: Predictive performance**

Thank you for this suggestion. We agree that the predictive performance of the encoders provides valuable context for interpreting the predictions in Figure 3(B). Because of space constraints, we have included these performance metrics in Appendix subsection A.2. **Based on other reviewer comments, we have also updated this subsection to display the predictive performance of our CLIP encoder. Accordingly, we have updated Figure 3 with the corresponding CLIP encoder predictions**.

> ### **W2: New discoveries**

The discovery of novel or finer-grained stimulus representations and organizational principles in the visual cortex is a very important topic in our line of research, and one that must certainly be emphasized in follow-up studies through the collection of brain responses to synthesized images in the MRI scanner. **We have included an example of the use of BrainACTIV to formulate hypotheses for less well-understood regions (particularly, anterior IT cortex) in Appendix A.7**.

BrainACTIV stands out from current approaches to optimal visual stimulus generation in that our synthesized stimuli are directly comparable to a reference image, whose structural properties are preserved.
The relevance of this contribution can be understood from different points of view:

- The successful modulation of predicted brain responses through changes in semantic categories in the image (while controlling for low-level structure) enables a strong test of (and evidence for) the well-known category selectivity theory (Figure 2).
- BrainACTIV can be used by experimenters to produce controlled experimental stimuli, where the hypothesized tuning axis of a group of voxels is derived in a data-driven manner. Additionally, we describe how researchers can select the degree of structural control and semantic variation (Section 4.5).
- By manipulating a reference image (instead of generating novel images from randomly sampled noise), the analyses required to formulate new hypotheses become significantly more efficient in terms of compute costs (fewer images are needed) and automation (no need for human inspection of hundreds of images).

Additionally, we present a novel way to generate fine-grained hypotheses about ROIs with similar category preferences in Section 4.4, by isolating what distinguishes one ROI from the other and directly accentuating it in an image. We believe this contribution can be used not only to explore selectivity in novel regions of interest, but also to expand our knowledge about well-known category-selective regions.

> ### **Q1: Closeness to baseline activation**

Thank you for the opportunity to clarify this choice. We select images closest to baseline activations primarily for efficiency purposes: because the ROI-specific modulation embeddings point in the same direction regardless of the image, the semantic categories that appear in the manipulations vary minimally across images. Hence, we avoid manipulating the entire test set (1,000 images) and instead focus on a smaller subset. Using images with activation close to baseline probabilistically represents an "average" visual stimulus (i.e.,
there are significantly more stimuli with baseline activation than with optimal activation) and intuitively provides more room for maximizing/minimizing activity. Because this choice could bias the selection (e.g., towards a single semantic category that always elicits baseline activation), we employed six mutually exclusive image subsets to enforce the diversity of the 120-image set.

> ### **Q2: Structural fidelity**

Thank you for pointing out this ambiguity. **We have updated Table 1's caption and line 361 to emphasize our goal**: reference and variations should be as similar as possible in spatial structure and color palette. Because relevant works (e.g., NeuroGen [a] and BrainDiVE [b]) do not use a reference image or control the low-level structure of their images, a direct comparison in this table is unfortunately not possible.

---

- [a] Gu, Z. et al. (2022). NeuroGen: Activation optimized image synthesis for discovery neuroscience. NeuroImage, 247:118812.
- [b] Luo, A. F., Henderson, M. M., Wehbe, L., and Tarr, M. J. (2023). Brain diffusion for visual exploration: Cortical discovery using large scale generative models. NeurIPS.

---

**Metareview**: This paper introduces BrainACTIV, a method for generating images that optimize brain activation in specific regions of interest (ROIs) by manipulating a reference image using diffusion models. The approach allows for both maximizing and minimizing brain activity, offering a new way to contrast and analyze brain regions' selectivity for different categories.

Weaknesses identified in the review, such as the need for more quantification of the effects, clearer discussions on the significance of results, and further exploration of comparisons to related methods, were mostly addressed during the authors' rebuttal.

The paper's strength lies in its novel approach to brain region-specific image manipulation and its potential to refine how we understand brain activation.
Therefore, I recommend accepting the paper for its innovative methodology and valuable contributions to neuroscience.

Additional comments on reviewer discussion: The authors addressed multiple reviewer concerns regarding the BrainACTIV methodology, providing clarifications and additional details. They explained that the minimization process explores a wider range of activations relative to a reference image, rather than deactivating brain regions, and suggested that it could generate new hypotheses for future experiments. They also emphasized how BrainACTIV accentuates differences between regions with similar category selectivity, facilitating hypothesis generation. The authors clarified the performance of the CLIP encoder, incorporating comparisons in updated figures, and addressed the analysis of mid-level features and early visual cortex regions by explaining their role in illustrating the tool's potential. They provided more context for the top-nouns analysis, which helps highlight differences between regions selective for the same category. Further, the authors justified using images closest to baseline activations for efficiency, and explained the rationale behind baseline comparisons, including an appendix with additional baseline computations. Regarding the novelty of BrainACTIV, they clarified that the integration of existing methods with novel applications in neuroscience, such as IP-Adapter and SDEdit, represents a significant computational advancement. I consider these comments to be the most important, and they were all addressed by the authors.

---

Clarification regarding the possibility of an additional baseline: The table currently shows an MSE of 79.2 between the reference image and random images, versus approximately 30 for the targeted image variations.
In your response, you seem to assume that SDEdit + IP-Adapter variations of the reference image (without brain-conditioned targeting) would give an MSE near zero, but my understanding is that diffusion-based image variations are not identical copies of the reference image, so I'm just asking what the corresponding MSE would be. I think it makes a difference if it is, e.g., 20-30 (meaning that the targeted image variation is very close to an unconditional image variation) or if it is truly close to 0.

---

Thank you for the opportunity to clarify the motivation for the reference image. In short:

- **(1)** The reference image is a technical requirement for interpolating in CLIP space (Equation 5) to provide the necessary high- and mid-level information to the synthesized image variations through IP-Adapter.
- **(2)** We control the low-level structure of the synthesized image variations by using SDEdit on a *real* reference image (as opposed to only fixing the random seed) for better integration of BrainACTIV into neuroscientists' own experiments.

For clarification: in all our analyses, we use (real) test images from the Natural Scenes Dataset as reference inputs.

**Our latest revision incorporates slight modifications to lines 74, 79, and Sections 3.1, 4.2, and 4.5 with the aim of improving the presentation of the above information**. We provide a more elaborate explanation for this below.

BrainACTIV's image generation process depends on two components:

1. IP-Adapter for providing the (brain-optimized) *high-level* and *associated mid-level* information to the synthesized image variations.

The adapter inputs our intermediate embeddings (Equation 5) as conditions to the diffusion model. These embeddings are obtained through spherical interpolation in CLIP space between a visual stimulus z_I and a brain-derived optimal endpoint z_max (from Equation 2).
As such, we require a reference visual stimulus to perform this computation. Moreover, as you mention, this reference indeed acts as a baseline for the synthesized stimuli in terms of high and mid-level information.\\n\\n2. SDEdit to obtain and fix the initial diffusion latents, thus reproducing the *low-level* (structural) information of the reference image in the synthesized image variations. \\n\\nYou are correct in pointing out that fixing the diffusion model's latents through a random seed would already impose the same low-level structure to all image variations, directly allowing comparisons (to the reference) in terms of pixel-wise mid-level features and semantic content (assuming the reference image would also be generated by the diffusion model). In that setting, one could indeed refrain from using SDEdit and obtain similar results for our analyses (i.e., Figures 4 and 5). \\nHowever, we opt to use SDEdit on a real reference image\\u2014as opposed to fixing the latents through a random seed\\u2014for better integration of BrainACTIV into neuroscientific hypothesis-driven experiments, enabling researchers to input any pre-selected control stimulus.\\n\\n> ### Concrete example\\n> These experiments could test hypotheses about semantic tuning that are minimally confounded by low-level structural properties. To give a concrete example: in neuroscientific research on face perception, it continues to be debated whether brain responses to images containing faces reflect tuning to \\u2018face-specific\\u2019 features or rather to domain-general object features (e.g. curved contours); see Vinken et al., (2023) ([a], also cited in Introduction). 
In this setting, BrainACTIV's incorporation of a user-input reference image a) allows it to serve as a control stimulus and brain activation baseline to which newly measured brain responses to generated stimuli can be compared, and b) to keep constant the presence of domain-general features while only changing face-specific features, thus allowing for the potential isolation of the effect of such features on brain responses in face-selective brain regions. \\n\\nWe hope this information has clarified the motivation behind the reference image, and we welcome further conversation regarding it.\\n\\n---\\n\\n- [a] Vinken K., Prince JS, Konkle T, and Livingstone MS. The neural code for \\u201cface cells\\u201d is not face-specific. Sci. Adv. 9, eadg1736 (2023). DOI:10.1126/sciadv.adg1736\"}", "{\"title\": \"Response to Reviewer 82Gz (2/2)\", \"comment\": \"> ### **W3: Mid-level features and early ROIs**\\n\\nWe agree that our analysis of mid-level features is limited in scope and requires further validation in future work. We would like to clarify that these four mid-level features are not meant to form a comprehensive analysis of the role of mid-level feature representation in the human visual cortex, but rather to serve as an illustration of how BrainACTIV\\u2014because of its use of a reference image as baseline\\u2014can give precise quantitative metrics of the effects of brain optimization on images and on predicted brain activations. **We have updated Methods section 3.2 to better explain our intention here**, while we have also **edited Results section 4.3** to highlight how these findings connect with the existing literature, and that they should be taken as indicative of BrainACTIV's potential for new hypothesis generation, rather than definitive proof for mid-level feature representation in the brain.
As additional context for these features, **we have included Appendix A.11 to illustrate how each of the features is represented at extreme values in NSD**.\\n\\nWe also agree that these metrics (and the applicability of BrainACTIV in general) could be further illustrated by targeting early- to mid-level regions. **Accordingly, we have added Appendix A.6 to illustrate results on areas V1 through V4**. These results highlight BrainACTIV\\u2019s success in modulating predicted responses, as well as its limitations regarding the use of CLIP\\u2019s image space as a model of the target ROI. Namely, even though these regions are known for their selectivity to low- and mid-level properties, the use of a semantic image space causes the changes within the manipulations to be mainly high-level. Interestingly, however, these high-level changes can be interpreted as real-life occurrences of high- and low-level properties, for example, bright light -> light bulbs. **We quantify this in Figure 19, where we observe stronger changes in features like color saturation and entropy for V1-V4 than for category-selective ROIs (Appendix A.8)**.\\n\\n> ### **W4 and Q2: Overall novelty and no-man\\u2019s-land**\\n\\nWe appreciate the suggestion regarding the application of BrainACTIV to less-understood ROIs. We acknowledge that our method, together with many recent studies on data-driven investigation of neural representation, must emphasize the discovery of novel or finer-grained selectivity and organizational principles in the visual cortex. **Based on this suggestion, we have introduced Appendix A.7 with a brief example on how BrainACTIV can be adapted to investigate selectivity in anterior IT**. \\n\\n> ### **Q1: Top-nouns analysis**\\n\\nThank you for raising this concern and for acknowledging BrainACTIV\\u2019s potential. 
The top-nouns analysis is meant to illustrate that BrainACTIV effectively discerns between regions that are selective to the same broad category (in this case, places \\u2192 OPA/PPA/RSC). This distinction is visually apparent in the image manipulations, but not in our category analyses (Figure 4) since the 16 broad categories we choose in Section 3.2 are not sufficiently fine-grained. Hence, we use individual nouns (obtained from WordNet as outlined in Appendix subsection A.1), showing that the specific set of nouns highlighted for each region emphasizes differences in selectivity across them. **We have changed the order of presentation of the results** so that the top-nouns analysis now accompanies the accentuation of differences between ROIs in Section 4.4 (hence in the new Figure 6).\", \"for_additional_clarification\": \"we performed this analysis to show that BrainACTIV retains one of the main strengths of BrainDiVE [f]. Namely, characterizing differences between ROIs selective for the same high-level category. We later expand on this by contributing a way to isolate these differences and accentuating them through image manipulation. **We have updated the text (Section 4.4) to include the explanation above**.\\n\\n---\\n\\n- [f] Luo, A. F., Henderson, M. M., Wehbe, L., and Tarr, M. J. (2023). Brain diffusion for visual exploration: Cortical discovery using large scale generative models. NeurIPS.\"}" ] }
CGOH2j1m0b
LipFed: Mitigating Subgroup Bias in Federated Learning with Lipschitz Constraints
[ "Khotso Selialia", "Yasra Chandio", "Jimi Oke", "Fatima M. Anwar" ]
Federated learning (FL) has emerged as a promising paradigm for training decentralized machine learning models with privacy preservation. However, FL models are biased, which can lead to unfair model outcomes towards subgroups with intersecting attributes. To address this, we propose LipFed, a subgroup bias mitigation technique that leverages Lipschitz-based fairness constraints to mitigate subgroup bias in FL. We evaluate LipFed's efficacy in achieving subgroup fairness across clients while preserving model utility. Our experiments on benchmark datasets and real-world datasets demonstrate that LipFed effectively mitigates subgroup bias without significantly compromising group fairness or model performance.
[ "Federated Learning", "Fairness" ]
https://openreview.net/pdf?id=CGOH2j1m0b
https://openreview.net/forum?id=CGOH2j1m0b
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zQR4BdS9Mh", "lO8N2GTttV", "GEitEHWWRF", "3g6rfalTvE", "0KqHrzLckB" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730662238847, 1730681471231, 1730331596175, 1730881460469, 1731979261473 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5258/Reviewer_voY5" ], [ "ICLR.cc/2025/Conference/Submission5258/Reviewer_NSYu" ], [ "ICLR.cc/2025/Conference/Submission5258/Reviewer_s158" ], [ "ICLR.cc/2025/Conference/Submission5258/Reviewer_dsd6" ], [ "ICLR.cc/2025/Conference/Submission5258/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a novel approach, LipFed, to address subgroup bias in federated learning by applying Lipschitz constraints. The problem addressed in this work is interesting and novel, tackling an important fairness issue in FL systems. Theoretical analysis provides solid foundations for the proposed method, and empirical results demonstrate significant improvements in subgroup fairness without major utility losses. However, incorporating comparisons with more recent baselines, expanding ablation studies, and refining the structure and detail in certain sections would further strengthen the paper\\u2019s impact and clarity.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe paper addresses a novel problem by applying Lipschitz constraints to mitigate subgroup bias in federated learning.\\n2.\\tThe theoretical analysis in this paper is solid, with well-defined bounds and comprehensive proofs that enhance the credibility of the proposed approach to achieving fairness in federated learning.\", \"weaknesses\": \"1.\\tThe paper utilizes baseline methods AFL (Mohri et al., 2019), TERM (Li et al., 2020), and\\nGIFAIR-FL (Yue et al., 2021). However, recent years have seen advancements in federated \\nlearning fairness and efficiency. 
It would strengthen the paper to include comparisons with \\nmore recent baseline methods.\\n2.\\tPaper structure: Some parts of the paper are redundant, but some important parts are not clearly explained, and the method section of the article is too thin.\", \"questions\": \"The theoretical guarantees support LipFed\\u2019s fairness, but additional results under different\\nclient settings would strengthen the paper. Additionally, I\\u2019m curious to see the results with \\nmore clients, if possible.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the challenge of mitigating *subgroup* bias in federated learning (FL). The authors begin by introducing the concept of *subgroup* fairness in the FL setting, distinguishing it from the more widely studied topic of *group* fairness in FL. They then propose LipFed, a technique to mitigate subgroup bias by incorporating Lipschitz-based fairness constraints into the learning process. The paper also provides upper bounds on group and subgroup fairness within the LipFed optimization framework. Experimental results across multiple datasets demonstrate that LipFed effectively reduces subgroup bias with minimal impact on overall utility, while also revealing a trade-off between subgroup and group fairness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) While most bias mitigation methods in FL focus on addressing group bias, LipFed is a novel technique designed specifically to mitigate subgroup bias in FL.\\n\\n2) LipFed introduces the integration of Lipschitz-based constraints into the learning process in an FL setting, creating a compelling link between individual fairness (Dwork et al., 2012) and subgroup fairness in FL.\\n\\n3) LipFed is versatile and can be readily applied to other FL algorithms, such as TERM and AFL, to reduce subgroup bias. 
Empirical results demonstrate its effectiveness in subgroup bias reduction.\", \"weaknesses\": \"1) While LipFed is designed to address subgroup bias in FL, the main text does not clearly explain how it preserves client privacy. Although Section 5.5 shows that LipFed achieves similar subgroup discrepancy and average accuracy as non-DP LipFed when tested with varying $\\\\epsilon$ values, this result is based on experiments with only two datasets. There is no theoretical guarantee that LipFed will preserve privacy in the general case (in terms of differential privacy). Specifically, it is unclear how client $k$ accesses $R(X_{k'}^g; \\\\theta)$. How does client $k$ compute this loss without access to the subgroup data of other clients?\\n\\n2) Theoretical results, particularly Theorem 4.2.2, are informative but lack clarity regarding their implications and the relationship between subgroup and group bias. While the theorem provides an upper bound for group bias under specific assumptions, it is unclear how this result adds value to the paper or aids in understanding. A section explaining the precise implications of this theorem would be beneficial.\\n\\n3) The main text contains several inconsistent and incorrect notations, some of which I highlighted in the questions section. These errors make it difficult to follow the paper\\u2019s main contributions and should be addressed for clarity.\", \"questions\": \"Here are some of my questions and comments. I would be willing to increase my score if my concerns are addressed:\\n\\n1) Some notation inconsistencies are confusing. 
Unifying the notation would improve readability and make the paper easier to follow:\", \"line_122\": \"$y_k \\\\in Y_k^n$ should be $y_k^n \\\\in Y_k$.\", \"line_134\": \"$F_k$ should be $R_k$.\\nLines 141\\u2013145: The distinction between $n_k$ and $N_k$ is unclear, and they seem to be used interchangeably.\", \"lines_145_and_196\": \"The indexes in $Disc({a^{g,k}})$ are sometimes over $g$ and sometimes over $k$. Please clarify in the text what these indexes represent.\", \"line_237\": \"Are $X_k^g$ and $X_{k'}^g$ referring to individuals or subgroups? Definition 3.1 seems to apply to individuals, so this distinction would be helpful.\\n\\n2) What does $I(x,y)$ in line 186 represent? Does this imply that noise is added to all elements of the input, including the features and labels?\\n\\n3) Why is the median used to compute the measure of unfairness? In fairness literature, fairness metrics are typically defined based on the worst-case scenario (e.g., demographic parity, equalized odds).\\n\\n4) In Figure 7, LipFed provides both the best group and subgroup fairness for some datasets (e.g., ACSE) but achieves the best subgroup fairness with the worst group fairness for others (e.g., ACSI). Do you have any intuition about why these differences occur across datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the problem of subgroup fairness in federated learning (FL). The motivation is that previous algorithms designed for group fairness in FL may be unable to achieve subgroup fairness across clients, due to a potential feature skew across clients which makes subgroups non-IID in terms of their feature distribution. Hence, ensuring fairness across both intersectional subgroups and broad groups is necessary. 
To achieve this goal, the work ensures equitable model performance across diverse subgroups by adding a regularization term to the objective function of FedAvg, which penalizes high discrepancy in clients' subgroup losses. Through experimental results, they show that the algorithm can reduce subgroup unfairness in FL settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"There are some strengths in the paper, listed below:\", \"The considered problem is an interesting problem, which seems not to have been studied before.\", \"An extensive discussion of the existing related works on group/subgroup fairness in centralized/FL settings is done in the appendix.\", \"The proposed idea is evaluated on four datasets and multiple baseline algorithms.\"], \"weaknesses\": [\"Despite the mentioned strengths, the work has multiple weaknesses, as listed below. I will clarify them further in my questions.\", \"The writing of the paper needs to be improved, as there are multiple typos, ambiguities, and a few wrong claims.\", \"The proposed algorithm, LipFed, is addressing an interesting problem, but it has a heuristic nature: it introduces multiple parameters, more precisely $\\\\epsilon, t$ (the regularization weight), $w_{g,k}$ (the importance weights), with limited heuristic methods for setting them, which makes me hesitate about the applicability of the proposed approach in real scenarios. The authors have mentioned this in the appendix as a limitation of the work.\", \"LipFed induces multiple constraints that may not be addressable for different models and datasets. This also induces computational overheads, which the authors have mentioned in the appendix as a limitation.
A study of the induced computational overhead compared to that of simpler algorithms, e.g., FedAvg, will make it clearer whether the overhead is tolerable or not.\", \"The theoretical analysis in Theorem 4.2.2 barely delivers a clear message.\", \"The experimental evaluations need to be improved to include some other baselines and some ablation studies.\", \"I will ask more detailed questions in the following to clarify my opinion.\"], \"questions\": \"In the following, I highlight the questions that are more important to me. A short answer, in one or two lines, will suffice for the other questions.\\n\\n1. Although the authors start LipFed's proposal with a discussion of the Lipschitz property, in practice it is not used in LipFed at all. What LipFed does in practice is regularizing the objective function of FedAvg with a measure of loss discrepancy across subgroups (for every existing group). As stated in the paper, the main barrier for using the Lipschitz property between two subgroups $g_k$ and $g_{k'}$ (of a group $g$) is that there is no way in FL to measure dissimilarity of features between the two subgroups.\\n\\n2. (important) In equation (6), where the subgroup fairness constraint is introduced, the goal is to make the subgroup losses of the existing $K$ clients close to each other (for all existing groups $g$). This is done by penalizing the discrepancy of clients' subgroup losses from the average subgroup loss (on the same group). The importance weights $w_{g,k}$ assigned to client $k$ are equal to the inverse of the average feature variance in the subgroup (client) $k$ (the feature corresponding to the group $g$), as stated in line 288.
However, as you have clearly mentioned in Example 1, the intra-client feature variance can be irrelevant to inter-client feature variance: in Example 1, for the group $g=$\\\"women\\\", the variance in each subgroup $g_1$ (client 1) and $g_2$ (client 2) is small, while the variance between the two subgroups is large (from mostly white images in client 1 to mostly black images in client 2). Does LipFed assign a larger importance weight to clients 1 and 2 compared to a third client that has a mix of white and black images (i.e., has a higher intra-client feature variance)? This method of setting $w_{g,k}$ seems heuristic to me. What if the data distribution in each client is iid, but it is non-iid across clients with high variance? This challenge for setting the weights arises from the limitation that we cannot measure feature dissimilarity across clients in FL.\\n\\n\\n3. (important) Other than the importance weights above, $\\\\epsilon$ and $t$ need to be set \\\"carefully\\\", and the algorithm is sensitive to them. An ablation study on these two parameters to show the sensitivity of LipFed's performance to them will be clarifying. \\n\\n4. (important) In the experimental results, only the GiFair baseline is on group fairness. TERM and AFL baselines are on client fairness. FedAvg is neither group-fair nor client-fair. LipFed is focusing on subgroup fairness, and needs to be compared to both the above categories to study whether it induces any costs on client fairness or group fairness. Comparison to more complex algorithms, e.g., FCFL [1], is vital. FCFL addresses both client and group fairness by min-max optimization across clients and enforcing \\\"Equal Opportunity\\\" (equal TPR) across sensitive attributes (groups). In contrast, LipFed does minimization across clients to minimize the average client loss regularized by subgroup loss discrepancy, eq (7). Hence, FCFL is the best baseline to compare with.
This suggests a way to improve the experimental results of the paper. \\n\\n[1] S. Cui, et al., \\\"Addressing algorithmic disparity and performance inconsistency in FL\\\", 2021. \\n\\n\\n5. (important) Another important result that can provide a clear insight about LipFed is an ablation study on the feature skew (non-IIDness) across subgroups. This can be done by an ablation study on the noise variance $\\\\sigma^2$ (in line 186 and Table 2 in the appendix). This is also related to question 2, above. \\n\\n\\n6. The statements of Theorems 4.2.1 and 4.2.2 need to be improved and made more precise. For example, in Th. 4.2.1, where is the model $h$ coming from? From LipFed? Also, theorem 4.2.2 barely delivers a clear message to me. LipFed is not addressing group fairness at all in its objective function, and just focuses on subgroup fairness. How can it provide any guarantee on the group fairness of the final model? \\n\\n7. What is $G$ (the number of groups) in your experiments? More precisely, how many sensitive attributes, e.g., race, gender, exist in your datasets?\", \"minor_comments_to_improve_the_writing_of_the_paper\": [\"line 122: it should be $Y_k$ (not $Y_k^n$)\", \"line 123: the number of samples in group $g$ at client $k$ should depend on $k$. I think $N_g$ should be changed to something like $N_{g,k}$.\", \"line 134 (or 135): it is $R_k$ (not $F_k$).\", \"eq (1): $\\\\theta_k$ should change to $\\\\theta$ (one model parameter is learned and all clients use the same model parameter)\", \"line 141: the number of subgroups in a group $g$ should depend on $g$. So $n_k$ should be changed to something like $n_g$.\", \"line 144: both $a_1^{g_{g,k}}$ and $a_2^{g_{g,k}}$ need to be changed to something like $a_1^{g,k}$ and $a_2^{g,k}$ to be compatible with eq (2). Also, $k$ changes from 1 to $n_g$ (defined above).\", \"In many lines, the latex command \\\\cite{} needs to be changed to \\\\citep{}.
For example, in line 149 (or 150).\", \"subgroup fairness and group fairness metrics need to be edited in lines 195 and 199, following the suggested notation modifications above.\", \"line 215, there is a typo.\", \"line 239, there is a typo.\", \"eq (6): on the right side of the inequality, $g$ should change from 1 to $G$ (not $n_g$). Also, before the inequality, $k^{i}$ should change to $k'$.\", \"line 281: $w_{g,k}$ should change to $w_{g,k'}$\", \"in Fig 6, Y axis label should change to \\\"subgroup discrepancy\\\".\", \"All in all, the considered problem is interesting, but the draft needs to be improved.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work the authors focus on the problem of subgroup fairness in the context of federated learning. The authors provide a new algorithm relying on Lipschitz-based fairness constraints. The method comes with a provable fairness guarantee, and empirical performance seems good on real-world datasets.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"I think the study of subgroup fairness is interesting in the context of federated learning.\", \"The proposed method, which borrows ideas from individual fairness, has some potential.\"], \"weaknesses\": \"- Notations and equations are confusing.\n1. E.g. $R_k$ in Equation (1) is defined without explanation. I assume the authors mean $F_k$. \n2. Definition 3.1, $\\Delta(A)$ is not defined anywhere. \n3. Equation 4 is also confusing. The objective to be optimized only contains $X_k$. Shouldn't it contain all $k$ and the constraint contain all $k,k'$ pairs?\n4. Equation 5, missing brackets. Also shouldn't it be $k'$ instead of $k$ inside the function $D$?\n5.
Equation 6, somewhere it's $k^i$, somewhere it's $k'$.\\n- Equation 6 seems different from authors' claim in L279-280 (\\\"the difference between the loss of a subgroup on client k and the aggregated losses of the same subgroup across other clients k\\u2032 is small\\\"). Equation 6 is enforcing a universal bound $\\epsilon$ for the *sum over all groups* of these differences. Could the authors explain that?\\n- Theorem 4.2.1 could be vacuous. $\\epsilon$ is a parameter that controls the loss, $\\Gamma$ characterizes heterogeneity via loss. Hence the upper bound is a product of two parameters in the loss space. However, by definition, the LHS is the difference in TPR, which has a naive upper bound of 1. Since there isn't any control over $R_k$, the upper bound is not even guaranteed to be smaller than 1.\\n- The motivation of the work is unclear. From my understanding, subgroup information is unknown; therefore, you can't directly apply prior group-fair FL algorithms to the subgroups? Further, are the authors aiming to protect local subgroup fairness (achieving fair prediction on each subgroup at each local client), or global subgroup fairness (achieving fair prediction on each subgroup across the entire network)? In both cases, since subgroup information is not known ahead of time, how do you evaluate that subgroup fairness is achieved?\\n- Missing comparisons with many group-fair FL baselines in the experiment section, including FCFL [1], FedFair [2], FairFed [3], FedFB [4], PFFL [5], etc.\\n- The authors only seem to measure Equal Opportunity. How about other group fairness metrics such as Demographic Parity and Equalized Odds?\\n\\n[1] Cui, S., Pan, W., Liang, J., Zhang, C., & Wang, F. (2021). Addressing algorithmic disparity and performance inconsistency in federated learning. Advances in Neural Information Processing Systems, 34, 26091-26102.\\n\\n[2] Chu, L., Wang, L., Dong, Y., Pei, J., Zhou, Z., & Zhang, Y. (2021).
Fedfair: Training fair models in cross-silo federated learning. arXiv preprint arXiv:2109.05662.\\n\\n[3] Ezzeldin, Y. H., Yan, S., He, C., Ferrante, E., & Avestimehr, S. (2023). FairFed: Enabling group fairness in federated learning. Proceedings of the AAAI Conference on Artificial Intelligence.\\n\\n[4] Zeng, Y., Chen, H., & Lee, K. (2021). Improving fairness via federated learning. arXiv preprint arXiv:2110.15545.\\n\\n[5] Hu, S., Wu, Z. S., & Smith, V. (2024, April). Fair federated learning via bounded group loss. In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 140-160). IEEE.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
CFOQd4tqn1
Ctrl123: Consistent Novel View Synthesis via Closed-Loop Transcription
[ "Hongxiang Zhao", "Xili Dai", "Jianan Wang", "Shengbang Tong", "Jingyuan Zhang", "Weida Wang", "Lei Zhang", "Yi Ma" ]
Based on the success of large image diffusion models, multi-view diffusion models have demonstrated remarkable zero-shot capability in novel view synthesis (NVS). However, the pioneering work Zero123 struggles to maintain consistency across multiple generated views. While recent modifications in model and training design have improved multi-view consistency, they often introduce new limitations, such as restricted fixed view generation or reliance on additional conditions. These constraints hinder the broader application of multi-view diffusion models in downstream tasks like 3D reconstruction. We identify the root cause of inconsistency as the excessive diversity inherent in generative models utilized for the NVS task. To address this, we aim to utilize stronger supervision to better align generated views with ground-truth images and constrain this diversity, and propose Ctrl123, a **closed-loop** transcription-based multi-view diffusion method that enforces alignment in the CLIP patch feature space. Extensive experiments demonstrate that Ctrl123 excels in **arbitrary** novel view generation, significantly improving multi-view consistency compared to existing methods.
[ "Novel view synthesis", "Diffusion Model", "Closed-Loop Transcription" ]
Reject
https://openreview.net/pdf?id=CFOQd4tqn1
https://openreview.net/forum?id=CFOQd4tqn1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tsE0JLhlgX", "p5i4jt8ifi", "nYZYzzvXtW", "gVQsRGq3Ja", "ThZ3xlg32f", "DtcVqcTugI", "CdcsU2eQd1" ], "note_type": [ "decision", "meta_review", "official_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1737523479624, 1734796898801, 1730679113030, 1730566767351, 1730696923606, 1729548783450, 1730309623808 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1998/Area_Chair_DKq1" ], [ "ICLR.cc/2025/Conference/Submission1998/Reviewer_Hca8" ], [ "ICLR.cc/2025/Conference/Submission1998/Reviewer_vgN2" ], [ "ICLR.cc/2025/Conference/Submission1998/Reviewer_4Bxn" ], [ "ICLR.cc/2025/Conference/Submission1998/Reviewer_dbKQ" ], [ "ICLR.cc/2025/Conference/Submission1998/Reviewer_NPue" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper introduces Ctrl123, which aims at improving multi-view consistency in novel view synthesis by aligning generated views with ground truth in the CLIP patch feature space. It extends Zero123 by fine-tuning with modified loss functions, including a closed-loop transcription cost. 
This approach enhances multi-view consistency while supporting arbitrary camera placements.\", \"the_common_strengths_identified\": \"(1) its innovative use of CLIP-based patch features and closed-loop transcription to enhance multi-view consistency; (2) its flexibility to generate arbitrary views while improving NVS quality; and (3) its extensive experimental validation and new metrics like Angle Accuracy and Mask IoU.\", \"the_major_weaknesses_include\": \"(1) the method is a minor tweak to Zero123, lacking significant novelty or broader applicability; (2) high computational costs due to unrolling in training; (3) unclear presentation, with missing supplementary materials like videos to demonstrate multi-view consistency; and (4) no comparative analysis with Zero123++ or inclusion of key metrics like LPIPS.\\n\\nThe overall ratings are consistently below the borderline and the rebuttal is missing; thus, the ACs agreed on rejection.\", \"additional_comments_on_reviewer_discussion\": \"No rebuttal was found.\"}", "{\"summary\": \"This paper addresses the problem of object-centric novel view synthesis from a single input image. It builds upon a pretrained Zero-123 model and applies rounds of mixed closed-loop training and standard diffusion training on the Objaverse dataset. In closed-loop training, the novel view rendering loss is optimized in the CLIP feature space over 50 denoising steps. The authors first demonstrate the effectiveness of closed-loop training through an overfitting experiment on 25 objects, showing significantly improved PSNR scores. They then compare their approach with recent state-of-the-art methods on GSO, RTMV, and OmniObjects3D, showing enhanced performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors achieve state-of-the-art results on three common benchmarks!\\n2. The authors design Angle Accuracy and Mask IoU to closely evaluate 3D consistency, which is interesting. \\n3.
The authors conduct extensive ablations on the design space of the closed-loop reconstruction loss, investigating factors such as the number of denoising steps, the feature extractor (VAE vs. CLIP), and the effectiveness of simultaneous vs. alternating training.\", \"weaknesses\": \"1. The proposed method is very computationally heavy, back-propagating through 50 steps of network evaluations. Can the authors provide more details about the speed and memory cost of the proposed closed-loop training?\\n\\n2. The methods section is not written very clearly (e.g., Section 3.2). A few concise paragraphs presenting the loss function and a diagram illustrating backpropagation through 50 denoising steps would improve readability. I am not familiar with the Closed-Loop paper (cited in line 258), and it took me a long time to understand Section 3.2. \\n\\n3. I recommend the authors cite relevant works on alignment for diffusion models, such as \\\"Directly Fine-Tuning Diffusion Models on Differentiable Rewards.\\\" The method in this paper is conceptually similar, with MSE in the CLIP feature space serving as a reward signal.\\n\\n4. Figure 2 has formatting errors.\", \"questions\": \"1. Out of curiosity, what are the computational and memory costs of optimizing through the full denoising steps? Was any heavy engineering work required to make training efficient?\\n2. Have the authors considered using LPIPS instead of MSE in the CLIP feature space? GRM, LGM, and GS-LRM all use LPIPS for better visual quality. My feeling here is that the CLIP loss in this paper is similar to an LPIPS loss. \\n3. The authors report that alternating between closed-loop and diffusion training produces the best results. Have you tested performance when using only closed-loop training (e.g., no diffusion training after 500 steps of closed-loop training)? 
Additionally, what is the performance drop if closed-loop training is removed and the saved computational resources are allocated to additional diffusion training, as in the setup for Table 2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a novel multi-view diffusion model named Ctrl123, which significantly improves multi-view consistency in novel view synthesis (NVS) while retaining the flexibility of generating arbitrary novel views. The method addresses the core issue of excessive diversity in generative models that leads to inconsistencies, by leveraging a closed-loop transcription-based framework that enforces alignment in the CLIP patch feature space. The main contributions include the introduction of the Ctrl123 method, improved training alignment with ground truth, and performance gains shown through experiments on training objects and a large-scale 3D dataset, Objaverse.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Ctrl123 effectively improves multi-view consistency in novel view synthesis while maintaining the flexibility to generate arbitrary novel views. This addresses a key challenge in existing methods by balancing consistency with generation diversity.\", \"Also, the training strategy of Ctrl123 demonstrates improvements in quantitative metrics, such as PSNR and SSIM, as well as rotation accuracy metrics (AA and IoU). 
These results show that Ctrl123 not only aligns better with ground truth images but also surpasses the performance of prior models like Zero123.\", \"Finally, when trained on the large-scale 3D dataset Objaverse, Ctrl123 maintains its performance across different evaluation datasets, showcasing its scalability and robustness in handling diverse 3D data while improving view consistency and rotation accuracy.\"], \"weaknesses\": [\"The biggest drawback of this paper lies in the difficulty of understanding how performance and results have improved specifically. The LPIPS metric (used in Zero123++) is not mentioned in the evaluation, and it might be worth considering if the authors could add this to their assessments. Could you include the LPIPS metric in the evaluation and comparison?\", \"The paper does not cite Zero123++ in the abstract, but it indirectly refers to Zero123++ by mentioning its fundamental limitation of 'restricted fixed view generation.' Does this mean the performance of Ctrl123 is better than Zero123++? However, there is no comparative analysis with Zero123++, so it is unclear what has improved in this paper. For the rebuttal process, could you present a comparative analysis with Zero123++? If there is a methodological reason for not including a comparative analysis with Zero123++, it would be helpful to explain why such a comparison was not included.\", \"It is stated that an in-depth analysis was conducted on a set of 25 objects. However, the current results seem to focus solely on simple, specific objects such as toys, dolls, or avatars for NVS. Can the method handle more complex objects for NVS? It seems that Zero123++ tackles more challenging problems compared to this paper. If good results are also produced for more complex sets, I suggest reflecting this in the rebuttal process.\", \"Ctrl123 claims to enforce alignment using CLIP, but how would the results have looked if the evaluation included a CLIP score? 
It would be beneficial to include the CLIP score in the evaluation metrics and discuss how the CLIP score relates to alignment and consistency improvements.\", \"Due to these points, it is difficult to objectively evaluate the results. While the claim that consistency is achieved can be acknowledged, it is challenging to give a high score due to the issue of objectively evaluating the results.\"], \"questions\": \"Mentioned in the weaknesses section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a method labelled Ctrl123 that is an extension of the Zero123 model for new view synthesis. The motivation is to improve the accuracy with which the resulting generated images reflect the shape of an object viewed from a particular angle, and to improve the generality of camera placement for the new image. Specifically, the authors seek to encourage shape and texture consistency between multiple synthetic views of a single generated object, rather than have the model generate different shapes for each new view. They also aim to have the model render images that accurately reflect the impact of a given change in camera pose. The proposed model extends the Zero123 model by finetuning with altered loss functions that reflect their goals. 
They particularly introduce a closed-loop cost which compares rendered new views against ground truth.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The experiments show that the method improves the performance measures chosen, and that the proposed method generates images that depict more consistent geometry than do the images generated by Zero123.\", \"the_paper_offers_the_following_insights_which_are_of_interest\": [\"path features are more informative than class features in training new view synthesis models\", \"applying an MSE-based loss in the latent space of an autoencoder leads to increasing vulnerability to training collapse\", \"Section 3.1 provides a good explanation of current NVS methods\"], \"weaknesses\": \"The primary contribution of the paper is a method for finetuning Zero123. The fact that the proposed method applies only to Zero123 means it is of limited significance or interest to the rest of the field.\\n\\nThe change to Zero123 is cosmetic rather than fundamental, and is not transferable to other methods.\\n\\nThe presentation of the paper is good and bad. It is well laid out, and effort has definitely been applied to survey the relevant literature. The English and the mathematics are very difficult to read, however, to the point where the Introduction is hard to parse. This is particularly true of the literature review in the Introduction, which lists many papers but leaves the reader more confused than illuminated. 
Effort has been made with the maths, but there is no consistency.\", \"some_indicative_examples_of_the_problems_with_the_presentation\": [\"There are at least 3 problems with the first sentence of the body of the paper: \\\"Recent advancements in novel view synthesis (NVS) have sparked considerable excitepment on 3D generation\\\"\", \"\\\"Although the modified task settings, like generating fixed-view multiple views\\\"\", \"The first three mathematical quantities introduced are \\\\bf{X}, \\\\bf{\\\\Delta R T}, and \\\\bd{X}_{tg}. These are later explained to be an image, a tuple of matrices, and an image, respectively, despite the fact that they have the same notation. Using a symbol to mean one thing, then the same symbol with a subscript to mean something different is challenging.\", \"Line 211 introduces a decoder 'g()' as being parameterised by \\\\eta, despite there being no \\\\eta shown.\", \"Equation 3 minimizes over \\\\eta^* despite the fact that this variable is not a parameter to the expression to be minimized\", \"g() has variable numbers of parameters, and none of them are \\\\eta^*\"], \"questions\": \"I don't have any questions\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper considers the problem of generating novel views of an object conditioned on an input image and a camera viewpoint change. It starts from the Zero-1-to-3 formulation and adds to it an additional loss, whose purpose is to reduce the diversity of images generated by the model, with the hope of focusing on increasing multi-view consistency. This additional loss simply compares the generated and ground truth target images in the CLIP feature space after unrolling the denoising process. 
The resulting new-view generator is compared to prior works using several sensible metrics and is shown to perform convincingly better.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The method is a relatively simple tweak on Zero-1-to-3 which seems to improve NVS quality significantly. This is something that may be useful in practice.\", \"The use of MegaPose for assessing generated viewpoints is nice.\", \"The proposed tweak is simple conceptually, although it may take significant amounts of GPU memory to implement due to the unrolling.\"], \"weaknesses\": [\"I could not find any supplementary material, and in particular no videos. This is odd for this paper, as videos are a great way for assessing qualitatively multi-view consistency.\"], \"questions\": \"Clarity: The authors should really clarify that the function g() unrolls a large number of denoising steps. This is only clear at the very end of the paper in line 523 and is *crucial* to understand why the proposed approach makes sense. Without unrolling, there is little difference between the denoising objective used in DDPM and the proposed \\\"closed loop\\\" regression loss. Please consider adding this in the method description.\\n\\nLikewise, the authors may want to elaborate more on why applying the regression loss after unrolling the denoiser tends to make the latter regress to the mean, killing its variance. 
They should also explain better how this connects to the choice of applying this loss not in VAE space or RGB space, but, further, in CLIP space, in order to \\\"soften\\\" variance killing and hence mode collapse.\\n\\nThe point of unrolling should probably be made quite evident in Figure 2 too, making it an obvious part of the diagram. This is a good place to show how one loss is applied after unrolling.\\n\\nNovelty is relatively modest, but then again the paper introduces a simple tweak that does result in SoTA performance, which is nice. This is a well explored art and it is not trivial to outperform prior works with a simple change to a relatively old baseline. However, the fact that no videos are given in the supplementary material makes me wonder how well these claims hold up qualitatively. The images in the paper seem fine to me, but showing an animation can really make most inconsistencies jump out.\\n\\nThe authors should discuss a bit more the issue with unrolling in terms of memory impact, and do so in the main paper. 50 steps of unrolling mean that the method can multiply GPU memory fifty-fold, meaning that only ultra-tiny training batches are possible, I presume.\\n\\n# Minor issues\\n\\n* You should have a space (or punctuation) after citations\\n* Zero123 is not styled correctly; the paper is called Zero-1-to-3. You should try to match as closely as possible the styling used by the original authors. Also, it is a \\\"to\\\", not a \\\"two\\\", in the middle.\\n* Line 212: the \\\"decoder\\\" is usually stochastic, not a function. In general, I found the notation a little confusing, as f() is a function, and g() is a distribution, but they are marked in the same way in the diagrams. 
There is no indication that g() is sampled from.\\n* Eqs (5) to (9): notation is essentially repeated twice, which seems unnecessary\\n* Fig (4): you could have included ground truth new viewpoints for reference\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"The proposed model is an image generator, hence potentially sensitive, but it is mostly trained on relatively unproblematic datasets. Furthermore, the specific training done here is likely to increase risks of the third-party base models.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes Ctrl123, a multi-view diffusion model that improves view consistency in novel view synthesis. By enforcing alignment with ground truth images in the CLIP patch feature space, Ctrl123 reduces excessive diversity in generative models, achieving consistency across views.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper introduces a novel approach, Ctrl123, which uniquely combines closed-loop transcription with CLIP-based patch features to enhance multi-view consistency in novel view synthesis. By addressing the issue of excessive diversity in generative models, this method removes a significant limitation of prior approaches like Zero123, improving applicability to NVS tasks.\", \"weaknesses\": \"1. This article lacks true innovation; simply adding a CLIP feature loss between ground truth and the predicted image does not constitute a novel contribution, as CLIP loss has already been employed extensively in previous works. For example, [1] has used CLIP feature space alignment to supervise the reconstruction of the scene, which is exactly the same as your method.\\n2. The authors spend too much space in Sec 3.1 explaining previous work. You should try to simplify the content in Sec 3.1 and use more space to explain your own work. 
In addition, your formulas 2, 4, 5, 6, and 8 are too similar. You can consider simplifying the repeated content.\\n\\n[1] Jain A, Tancik M, Abbeel P. Putting nerf on a diet: Semantically consistent few-shot view synthesis[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 5885-5894.\", \"questions\": \"1. I suggest that the author draw inspiration from other works on multi-view consistency to enhance the novelty of their approach. For example, [2] improves consistency by optimizing ray casting, while [3] achieves this by creating a smooth camera trajectory. More detailed innovations could be explored in the image generation process, such as adding geometric constraints, applying regularization, optimizing pixel rays, or incorporating more explicit 3D modeling to strengthen consistency, rather than simply adding a CLIP feature loss.\\n2. How does the author select the small dataset used to test alignment capability in Table 1? Is it representative? A similar question arises in the SOTA comparison in Section 4.3, where only 20 objects are used for quantitative analysis. Using more objects would strengthen the evidence for the method's generalization ability.\\n3. For 3D reconstruction, the author should include comparative experiments with other methods.\\n\\n[2]\\tSeo S, Chang Y, Kwak N. Flipnerf: Flipped reflection rays for few-shot novel view synthesis[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 22883-22893.\\n[3]\\tKwak J, Dong E, Jin Y, et al. Vivid-1-to-3: Novel view synthesis with video diffusion models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 6775-6785.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
CFMdrcK935
Decomposition of one-layer neural networks via the infinite sum of reproducing kernel Banach spaces
[ "Seungcheol Shin", "Myungjoo Kang" ]
In this paper, we define the sum of RKBSs using the characterization theorem of RKBSs and show that the sum of RKBSs is compatible with the direct sum of feature spaces. Moreover, we decompose the integral RKBS $\mathcal{F}\_{\sigma}(\mathcal{X},\Omega)$ into the sum of $p$-norm RKBSs $\\{\mathcal{L}\_{\sigma}^{1}(\mu\_{i})\\}\_{i\in I}$. Finally, we provide some applications to enhance the structural understanding of the integral RKBS class.
[ "Neural networks", "Reproducing kernel Banach spaces", "Class of Integral RKBSs" ]
Reject
https://openreview.net/pdf?id=CFMdrcK935
https://openreview.net/forum?id=CFMdrcK935
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z1B1I4DhOC", "rFiSsVA13L", "qm3npzsttb", "pw6H17cmAf", "o4hALkkvcY", "n2MSgn9ND2", "lUFHHBlpv9", "kKRg4DUvEG", "k8Caf1XnV7", "iPT5ggrFHo", "gmxqJnQYI9", "gfLwH75Qwv", "ey5nT2STgo", "dwpOLPGRQt", "d5ZSgoIhAq", "c5Pl9tWH8o", "bUrsP3Y8I7", "XW9wu3WY7v", "WxCmDf9OKv", "VgtE3ZmZPJ", "UxsmOABYCf", "TC0CfWjKTq", "SLdvLXVKdf", "OhPkIeQz6t", "Le0vfiJsMq", "KiA4Yyaccm", "HOZO0KFla8", "F58cTHGzda", "AbSYjBApeC", "8fBaGVXph9", "6OjIOf4yL0", "60h7UWk2PI", "3xS8KdQAGV" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1735252773884, 1732532607372, 1732507294278, 1732477931672, 1732540294436, 1731945477091, 1732625276568, 1731999293900, 1732360244115, 1731997846653, 1731946674813, 1732616466983, 1732175645669, 1732359376047, 1732624748708, 1732513582353, 1732360062500, 1732000010036, 1732432158406, 1732176213812, 1730103431007, 1731998741670, 1732602421491, 1731949499915, 1737523619789, 1730581469795, 1730506473335, 1732283140090, 1732356761975, 1731949152002, 1730600065279, 1731947709034, 1731948507638 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4117/Area_Chair_zfA3" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_Ubdx" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_UkFb" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_mrzL" ], [ 
"ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_Ubdx" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_Ubdx" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_mrzL" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_UkFb" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_Ubdx" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_t3f6" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Reviewer_t3f6" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ], [ "ICLR.cc/2025/Conference/Submission4117/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"The submission studies reproducing kernel Banach spaces. RKBS arise in connection with infinitely wide neural networks; the paper also posits them as a relevant hypothesis space for learning problems due to their properties of completeness and pointwise convergence. The paper has two main results. 
The first (Prop 4.2) shows the compatibility of RKBS with the direct sum operation, namely, the sum of RKBS is isometrically isomorphic to an RKBS. The second (Thm 4.4) states that an RKBS in integral form can be written as a sum of L1 spaces, with different feature distributions.\\n\\nAs described below, reviewers initially produced a mixed evaluation of the paper. On the positive side, the paper is mathematically solid, and contributes several results on the structure of reproducing kernel Banach spaces. As the paper notes, these spaces are less studied than reproducing kernel Hilbert spaces. Whereas the summability of RKHS captures the hypothesis space associated with fixed first layer features, structural results on RKBS could shed light on networks with varying first layer features. The paper shows that an RKBS in integral form can be decomposed into simpler spaces (L1 spaces). \\n\\nWhile the discussion provided useful context for appreciating the paper\\u2019s results, the paper would still benefit from a clearer and more accessible discussion of their implications for neural networks, as well as their significance in the broader program of establishing algorithms for solving learning problems in RKBS.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers found the paper mathematically solid, contributing several results on the structure of reproducing kernel Banach spaces. At the same time, several reviewers raised questions regarding the direct implications of this analysis for neural network learning. In particular, reviewers raised the following issues:\\n\\n- Mathematically dense presentation [t3f6, UkFb]\\n- Reviewers found the connection to neural networks unclear [t3f6, UkFb, Ubdx, mrzL], and were similarly unclear on the practical implications of the results. \\n\\nThe discussion also clarified the technical novelties of the paper (e.g., showing compactness of the operator M(\\\\Omega)). 
The discussion also clarified certain higher level aspects of the paper\\u2019s contributions. Roughly, there is a separation between RKHS (which correspond to fixed feature methods) and RKBS. Previous work has demonstrated the existence of an optimal solution in RKBS, but we currently lack a corresponding concrete algorithm for computing this solution. Structural results on RKBS could facilitate the development of such an algorithm.\"}", "{\"comment\": \"Thank you very much for your interesting and insightful comments. I understand that if we consider the case where the neural networks are represented as a finite sum, the corresponding models in RKHSs should also be represented by a finite sum of vectors. But if we use the representer theorem, a model in RKHS is often represented by a finite sum of feature maps even if we do not use any approximation techniques. Is this fact related to the understanding of the connection between neural networks with a finite number of neurons and multiple kernel machines?\"}", "{\"comment\": \"Thank you for your clarifications. I see that parts of the draft have indeed been rewritten and better explained. I have raised my score accordingly.\"}", "{\"comment\": \"Thank you so much for your response!\\nWe think this is truly an excellent question, and we have learned a lot thanks to you. First, what we intuitively showed in Proposition 5.2 is that neural networks have greater expressive power than kernel machines built with multiple kernels. The part you mentioned seems to be asking whether the reverse holds true in practical scenarios. In many cases, machine learning theory assumes an infinite-dimensional vector space as the hypothesis space. However, when we use a fixed, finite number of neurons, the hypothesis space becomes finite-dimensional. 
As you pointed out, in practical situations, instead of using an integral representation $f(x) = \\\\int_{\\\\Omega} \\\\sigma(x, w) d\\\\mu(w)$, we use a discretized finite sum with $m$ fixed neurons $$\\\\text{Equation 1}: f(x) = \\\\sum_{i=1}^{m} \\\\eta_{i} \\\\sigma(x, w_{i}) $$ to represent neural networks. If we train using $m$ fixed neurons, the function space representable by Equation 1 would be a finite-dimensional vector space. If we use $\\\\\\\\{\\\\sum_{i=1}^{m} \\\\eta_{i} \\\\delta_{w_{i}}: \\\\eta_{i}, w_{i} \\\\in \\\\mathbb{R}\\\\\\\\}$ as our feature space instead of the measure space $\\\\mathcal{M}(\\\\Omega)$ and develop the discussion further, we might show that models built with a finite number of certain multiple kernels are equivalent to neural networks with $m$ fixed neurons. Furthermore, the situation you described is akin to a scenario where we use Extreme Learning Machines [1] (finite-dimensional) instead of Random Fourier Features [2] (infinite-dimensional) for training a model. We agree that exploring this area further is indeed interesting, and we appreciate the opportunity to continue this dialogue. \\n\\n[1] Huang, G.-B., Zhu, Q.-Y., & Siew, C.-K. Extreme learning machine: theory and applications. Neurocomputing, 70(1-3), 489-501, (2006).\\n\\n[2] Ali Rahimi and Benjamin Recht. \\u201cRandom features for large-scale kernel machines\\u201d. In: Advances in neural information processing systems 20 (2007).\"}", "{\"comment\": \"The update looks good. I think the connection to neural networks can be strengthened even further. Nevertheless, with the current state I can see the point of the authors, which is sufficient to raise my score to 6. 
I am still quite intrigued by the fact that you need compactness of $\\\\Omega$ for the result to hold.\\n\\nRegarding the result of Neal (1996), he proves that when considering a Bayesian neural network under decent conditions (finite variance of priors, and bounded activation functions), the limit of the standardized infinite width regime converges to a Gaussian process (GP) with a known covariance function, by a simple application of CLT. The connection to GPs allows for a clear connection with the L2 space, and its corresponding reproducing kernel. Further results have obtained explicit covariances (e.g., Cho and Saul 2009, Lee et al 2018, de Matthews et al 2018, Yang 2020). \\n\\nAll of that to say that I strongly suspect that the compactness assumption could be dropped in one way or the other.\\n\\n##### References:\\n\\nYoungmin Cho and Lawrence Saul. Kernel methods for deep learning. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta (eds.), Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc., 2009. URL https://proceedings.neurips.cc/paper_files/paper/2009/file/5751ec3e9a4feab575962e78e006250d-Paper.pdf\\n\\nAlexander G. de G. Matthews, Jiri Hron, Mark Rowland, Richard E. Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1-nGgWC-.\\n\\nJaehoon Lee, Jascha Sohl-dickstein, Jeffrey Pennington, Roman Novak, Sam Schoenholz, and Yasaman Bahri. Deep neural networks as Gaussian processes. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1EA-M-0Z\\n\\nRadford M Neal. Priors for infinite networks. In Bayesian Learning for Neural Networks, Lecture Notes in Statistics, pp. 29\\u201353. Springer New York, New York, NY, 1996. ISBN 0387947248.\\n\\nGreg Yang. 
Tensor programs I: wide feedforward or recurrent neural networks of any architecture are Gaussian processes. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 2019. URL http://arxiv.org/abs/1910.12478\"}", "{\"comment\": \"Thank you for your insightful observations and attention to detail. The points you raised are excellent questions that not only prompt deep reflection for us but also for other readers. Moreover, many of the aspects you highlighted have been immensely helpful in allowing us to refine and further develop the content of our paper.\\n\\nWeakness1. The authors claim that there is a connection with neural networks, but do not make it clear nor precise. For example, the only mention of neural networks are in the introduction and a single mention in Subsection 3.3, without going into detail of the correspondence between the terms developed in the paper and neural networks.\", \"answer\": \"The RKBS triple $(\\\\Psi, \\\\psi, A)$ concept we use is analogous to what is suggested in traditional kernel methods (Hilbert space). To provide an intuitive explanation, through the map $\\\\psi: \\\\mathcal{X} \\\\rightarrow \\\\Psi^{\\\\*},$ we can view the information (points) from the data space $\\\\mathcal{X}$ as information in the high-dimensional abstract space $\\\\Psi^{*}$. This enables us to distinguish features of the data that could not be separated in the original data space $\\\\mathcal{X}$, but can be in $\\\\Psi^{\\\\*}$. The linear map $A$ serves to transform the information, now distinguishable in the abstract space, into a function space (RKBS) that we can learn. To help with further understanding, we would like to explain why the hypothesis space for neural networks is made into an RKBS in relation to Weaknesses 1 and 3.\\n\\nThe first reason for using RKBS and RKHS as hypothesis spaces in machine learning is that we expect our hypothesis space (function space) to satisfy at least completeness and pointwise continuity. 
This is because, when we aim to find a target function through a machine learning model, the approximation process of the target function relies on the metric (or topology) of the hypothesis space, and this approximation process must converge at least pointwise (the minimal assumption that two functions in the function space are close).\\n\\nThe second reason is that, in order to obtain the existence of a solution to the problem we are trying to solve through machine learning, we need to demonstrate the Representer theorem. In many cases, the Representer theorem is derived under the assumption of RKHS and RKBS. For this reason, in paper [1], the hypothesis space for neural networks is defined as the integral RKBS, and the Representer theorem is shown. However, demonstrating such existence does not directly lead to a concrete algorithm, which we believe is one of the main reasons why deep learning is often referred to as a black box.\\n\\n[1] Francesca Bartolucci et al. \\u201cUnderstanding neural networks with reproducing kernel Banach spaces\\u201d. In: Applied and Computational Harmonic Analysis 62 (2023), pp. 194\\u2013236.\"}", "{\"comment\": \"Thank you so much for taking the time to review our submission. We truly appreciate it!\"}", "{\"comment\": \"Weakness3.\\nAround 5 of the 8 pages are about the definitions or restating results in previous literature. It would be great if this work could spend some space on (1) the potential benefits of their results, (2) takeaway messages about RKBS, (3) technical difficulties encountered and solved, and (4) novel mathematical tools and techniques that are of independent interest. It is otherwise unclear what would be the central contribution of this work.\", \"answer\": \"Regarding Propositions 5.1 and 5.2:\\nWe mentioned them as examples where our theorem can be applied. 
Specifically, we model a well-known RKHS, $\\\\mathcal{L}\\\\_{\\\\sigma}^{2}(\\\\pi)$, which corresponds to infinite-width one-layer neural networks. In this setup, the input-layer parameters are drawn from a distribution and fixed, while only the layer-output parameters are updated during training. Then, we construct a finite sum of such RKHSs, $\\\\sum\\\\_{i\\\\in [n]}^{2}\\\\mathcal{L}\\\\_{\\\\sigma}^{2}(\\\\mu\\\\_{i})$, as a multiple kernel learning framework while preserving their RKHS structure. We show that this sum of RKHSs belongs to the integral RKBS. Intuitively, this means that the elements of the resulting sum-of-RKHSs model can be represented by a one-layer neural network. While our theorem primarily discusses RKHSs, as noted in Remark 5.3, the results can also be generalized to arbitrary $p$. The key point we want to emphasize is that we propose a methodology for embedding complex spaces composed of sums of RKBSs, which are often not well understood, into well-known RKBSs. This approach could be particularly useful for ensemble methods involving multiple complex models.\", \"regarding_previous_literature\": \"You did not provide a reference, so we are unsure which work you are referring to. However, if you are referring to [1], we acknowledge that many of our notations and definitions were inspired by that reference. Nevertheless, we emphasize that we have clearly cited the sources of all the notations we have used. Additionally, we cautiously point out that the union of vector spaces does not generally retain a vector space structure. 
The methodology we propose for decomposing while preserving the RKBS structure is entirely original to our work.\\n\\n(1) The potential benefits of their results:\\nAs shown in the diagram in Weakness 2, we expect to develop an algorithm that approximates the solution guaranteeing existence in the Representer theorem for one-layer neural networks in a bottom-up manner.\\n\\n(2) Takeaway messages about RKBS:\\nWhen we define the hypothesis space (function space) in machine learning, we consider completeness and pointwise convergence as minimal assumptions for the properties of the function space. This is because, when approximating a target function through the metric (or topology) of the hypothesis space in machine learning, the approximating function and the target function must at least pointwise converge during the learning process. The key to using RKBS is that, through the characterization theorem (Theorem 3.3), the hypothesis space (RKBS) can always be thought of as a triple consisting of feature space, feature map, and RKBS map (i.e., ($\\\\Psi$, $\\\\psi$, $A$) in our submission). This allows us to understand the hypothesis space once we understand the feature space.\\n\\n(3) Technical difficulties encountered and solved:\\nTo handle infinite sums, we carefully verified and introduced related concepts ourselves in the Preliminaries section. More specifically, we did our best to introduce summability and several isometric isomorphisms, which allowed us to apply related concepts freely throughout our proof process.\\n\\n(4) Novel mathematical tools and techniques that are of independent interest:\\nOne of the isometric isomorphisms we used, namely equation (2.4), can be thought of as a simpler version of Kakutani's theorem, which classifies Abstract L-spaces. We believe that this theorem itself is quite interesting. Moreover, as mentioned earlier, we appropriately used summability and the isometric isomorphisms we introduced, as needed in our proofs. 
For example, in lines 650-664 of Proposition 4.2, we successfully transformed a norm defined by the double infimum using the derived properties of summability, even though we did not directly mention it. These methods are not necessarily obvious, and we think they are quite novel.\\n\\n[1] Len Spek, Tjeerd Jan Heeringa, and Christoph Brune. \\u201cDuality for neural networks through reproducing kernel Banach spaces\\u201d. In: arXiv preprint arXiv:2211.05020 (2022).\"}", "{\"comment\": \"Dear Reviewer t3f6,\\n\\nOnce again, we sincerely appreciate your valuable feedback :)\"}", "{\"comment\": \"Thank you for your valuable feedback. We acknowledge that our submission did not adequately explain the contribution and motivation behind our work. We will make sure to clarify these points to the best of our ability.\\n\\nWeakness1.\\nThe related work section is not informative. In particular, Section 1.1 does not introduce what are the advantages and, importantly to this paper, the limitations of RKHS, and it does not address the previous literature on RKBS nor what questions the literature has solved with RKBS. Also, it does not provide any motivation for the results presented in this work. It is mainly just a list of abbreviated references.\", \"answer\": \"Intuitively, it can be explained using the diagram below.\\n$$\\n\\\\\\\\{\\\\text{ family of singular probability measures } \\\\mu\\\\_{i} \\\\text{ on } \\\\Omega\\\\\\\\} \\\\rightarrow \\\\sum\\\\_{i\\\\in [n]}^{p}\\\\mathcal{L}\\\\_{\\\\sigma}^{p}(\\\\mu\\\\_{i}) \\\\overset{\\\\text{distance}}{\\\\hookrightarrow} \\\\mathcal{F}\\\\_{\\\\sigma}(\\\\mathcal{X},\\\\Omega) = \\\\sum\\\\_{i\\\\in I}^{1}\\\\mathcal{L}\\\\_{\\\\sigma}^{1}(\\\\mu\\\\_{i})\\n$$\\nAs mentioned in Weakness 1, although the Representer theorem for integral RKBS has been proven, no specific algorithm that guarantees the existence of a solution has been developed (and this is not obtainable with the algorithms we actually use for learning). 
Therefore, we first needed to decompose the integral RKBS while maintaining the structure of RKBS, and we have shown this. As you know, based on the theories from [1] and [6], we can model a one-layer neural network with infinite width by fixing the input-layer parameters chosen from a probability measure $\\\\pi$ and updating only the layer-output parameters, which corresponds to $\\\\mathcal{L}\\\\_{\\\\sigma}^{2}(\\\\pi)$ in our paper. Therefore, a concrete multiple kernel sum algorithm with the hypothesis space $\\\\sum\\\\_{i\\\\in [n]}^{2}\\\\mathcal{L}\\\\_{\\\\sigma}^{2}(\\\\mu_{i})$ exists. Ultimately, our goal is to develop an algorithm that guarantees the existence of a solution for one-layer neural networks in a bottom-up manner. This could be achieved by finding and minimizing a new distance, as illustrated in the diagram above.\\n\\n[1] Francis Bach. \\u201cBreaking the curse of dimensionality with convex neural networks\\u201d. In: The Journal of Machine Learning Research 18.1 (2017), pp. 629\\u2013681.\\n\\n[2] Chao Ma, Lei Wu, et al. \\u201cThe Barron space and the flow-induced function spaces for neural network models\\u201d. In: Constructive Approximation 55.1 (2022), pp. 369\\u2013406.\\n\\n[3] Francesca Bartolucci et al. \\u201cUnderstanding neural networks with reproducing kernel Banach spaces\\u201d. In: Applied and Computational Harmonic Analysis 62 (2023), pp. 194\\u2013236.\\n\\n[4] Len Spek, Tjeerd Jan Heeringa, and Christoph Brune. \\u201cDuality for neural networks through reproducing kernel Banach spaces\\u201d. In: arXiv preprint arXiv:2211.05020 (2022).\\n\\n[5] E Weinan and Stephan Wojtowytsch. \\u201cRepresentation formulas and pointwise properties for Barron functions\\u201d. In: Calculus of Variations and Partial Differential Equations 61.2 (2022), p. 46.\\n\\n[6]Ali Rahimi and Benjamin Recht. \\u201cRandom features for large-scale kernel machines\\u201d. 
In: Advances in neural information processing systems 20 (2007).\"}", "{\"comment\": \"Question1.\\nHow important is the compactness of $\\\\Omega$ for the main results? By the work of Neal (1996) we know that for decent densities (e.g., finite moments and bounded activation functions) we have a kernel similar to the kernel of $L^2$ stated in line 299, even when $\\\\Omega$ is unbounded.\", \"answer\": \"Yes, it is needed to construct a specific continuous function $\\\\sigma$. As seen in lines 828\\u2013834, we used the fact that $\\\\sigma(x,w,\\\\frac{i}{n}) = \\\\sigma_{i}(x,w)$. Furthermore, in order to reliably define the integral RKBS $\\\\mathcal{F}\\\\_{\\\\sigma}(\\\\mathcal{X}, \\\\Omega)$, we used it to make $\\\\sigma$ a continuous function defined on $\\\\mathcal{X} \\\\times \\\\Omega \\\\times [0,1]$.\\n\\nRegarding citation issues, minor notation issues, and minor grammatical mistakes:\\nThank you so much for kindly pointing out our shortcomings. It seems like a great learning opportunity for us. We will correct it and upload the revision. Once again, thank you\\n\\n\\n[1] Francis Bach. \\u201cBreaking the curse of dimensionality with convex neural networks\\u201d. In: The Journal of Machine Learning Research 18.1 (2017), pp. 629\\u2013681.\\n\\n[2] Rong Rong Lin, Hai Zhang Zhang, and Jun Zhang. \\u201cOn reproducing kernel Banach spaces: Generic definitions and unified framework of constructions\\u201d. In: Acta Mathematica Sinica, English Series 38.8 (2022), pp. 1459\\u20131483.\\n\\n[3] Conway, J. B. (1997). A Course in Functional Analysis (2nd ed.). Graduate Texts in Mathematics, 96. Springer-Verlag. ISBN 978-0387972459\"}", "{\"comment\": \"Thank you very much for your response. It helped me to understand the proposed framework well. Based on the rebuttal and the additional discussions, I updated my score.\"}", "{\"comment\": \"Thank you so much for your thoughtful and valuable feedback. 
We believe your question about whether the mathematical theories in our submission can be effectively evaluated through numerical experiments is particularly insightful and greatly appreciated.\\n\\nWeakness1.\\nThe paper could benefit from more illustrative examples to make the abstract mathematical concepts more accessible to a broader audience, particularly those in the machine learning community without a strong background in functional analysis.\\n\\nWeakness3.\\nThe presentation of some key definitions and theorems is rather dense, making it difficult for readers to follow the logical flow. Providing intuitive explanations alongside formal proofs would help bridge the gap for less mathematically inclined readers.\", \"answer\": \"To begin with, let us explain the motivation behind our study. For the hypothesis space of one-layer neural networks (the integral RKBS class), [1] proved a Representer Theorem in Theorem 3.9. This theorem guarantees the existence of a solution by showing that our target function can be expressed as a finite sum of functions when solving optimization problems in one-layer neural networks using empirical risk minimization (ERM). However, the existence of such a solution does not guarantee the existence of a specific algorithm to find it. Therefore, it is likely that the optimization methods we actually use for neural networks do not find the solution guaranteed by the Representer Theorem. (In fact, we believe that additional dummy parameters considered in practice lead to an increase in the generalization error compared to the solution guaranteed by the Representer Theorem.) In the case of kernel methods using RKHS (e.g., kernel ridge regression), the theoretical existence of a solution and the algorithm to find it are clear. 
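To make the contrast with kernel methods concrete, here is a minimal, self-contained sketch of the random-features construction discussed in this thread (the data, the `tanh` activation, and all parameter values are illustrative assumptions, not taken from the papers under discussion): the input-layer weights are sampled once from a distribution $\pi$ and frozen, so the representer-theorem solution for the remaining output layer is computable in closed form by kernel ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_feature_map(X, W):
    # "Input layer" with frozen weights: phi_j(x) = tanh(<w_j, x>) / sqrt(m),
    # so Phi @ Phi.T is a Monte Carlo estimate of the kernel induced by pi.
    m = W.shape[0]
    return np.tanh(X @ W.T) / np.sqrt(m)

def kernel_ridge_fit(K, y, lam):
    # Representer-theorem solution of min_f sum_i (f(x_i) - y_i)^2 + lam * ||f||^2:
    # f = sum_i alpha_i k(x_i, .), where (K + lam * I) alpha = y.
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

# Toy 1-D regression problem (hypothetical data, for illustration only).
X = rng.uniform(-1.0, 1.0, size=(40, 1))
y = np.sin(3.0 * X[:, 0])

# Input-layer parameters sampled once from pi = N(0, 2^2) and then frozen;
# only the "output layer" (the ridge coefficients) is learned.
W = rng.normal(0.0, 2.0, size=(500, 1))
Phi = random_feature_map(X, W)   # shape (40, 500)
K = Phi @ Phi.T                  # approximate kernel Gram matrix

alpha = kernel_ridge_fit(K, y, lam=1e-3)
y_hat = K @ alpha
print("train MSE:", float(np.mean((y_hat - y) ** 2)))
```

Here `K` approximates the kernel $k(x,x') = \int \sigma(\langle w,x\rangle)\,\sigma(\langle w,x'\rangle)\,d\pi(w)$, and both the existence of the minimizer and the algorithm that finds it are explicit; it is exactly this pairing of existence result and concrete algorithm that the thread notes is still missing for the full integral RKBS.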
We aimed to find a similar methodological approach for neural networks, where both the theoretical existence of solutions and the practical optimization algorithms can be identified, which motivated us to explore this decomposition. However, there are still limitations in conducting numerical analyses at this stage. We will elaborate on this further in the responses to Question 1 and Question 2 below.\\n\\n[1] Francesca Bartolucci, Ernesto De Vito, Lorenzo Rosasco, and Stefano Vigogna. Understanding neural networks with reproducing kernel Banach spaces. Applied and Computational Harmonic Analysis, 62:194\\u2013236, 2023\"}", "{\"comment\": \"Dear reviewer Ubdx,\\n\\nOnce again, we sincerely appreciate your valuable feedback. We have added further elaboration regarding Weakness1 in Section 3.1, and provided additional explanation related to Weakness2 in the Related Work section. However, regarding the practical aspects, we are unable to provide a mathematically perfect description at this point, so we intend to substitute the explanation mentioned above. Additionally, we have addressed the minor comments you raised and uploaded the revised version. Thank you once again for giving us this valuable opportunity.\"}", "{\"comment\": \"We sincerely appreciate your valuable response.\\n\\nWe also strongly agree with your perspective and are hopeful about removing the compactness assumption. Likewise, we positively view the direction of extending the parameter space \\n$\\\\Omega \\\\subset_{cpt} \\\\mathbb{R}^{D}$ to $\\\\mathbb{R}^{D}$. We apologize for not fully explaining our reasoning in response to Question 1, and we would like to make a few additional comments on this matter.\\nAs mentioned above, please understand that the compactness assumption was primarily introduced for simplicity in our arguments.\\n\\n1. 
Regarding the parts of our paper where the compactness assumption is necessary (Proposition 3.5):\\n\\nIn fact, Proposition 3.5 is somewhat tangential to the main context of our paper. This theorem was included to aid understanding of the class of integral RKBS, as its structure diverges from the intuition we typically derive from the Universal Approximation Theorem.\\nIn the proof of Proposition 3.5, we applied Exercise 8.10.134 in reference [1], which implicitly relies on the fact that any measure on a compact space is tight. Therefore, it is unclear whether the same argument can be applied in the general case of $\\\\mathbb{R}^{D}$.\\n(Additionally, we would like to suggest considering the interesting result in [2], which shows that no RKHS contains the space of continuous functions defined on a compact metric space.)\\n\\n2. Regarding a purely mathematical perspective:\\n\\nIf we aim to extend the parameter space \\n$\\\\Omega$ beyond a subset of $\\\\mathbb{R}^{D}$ to a general topological space $(\\\\Omega,\\\\tau)$ (not restricted to Euclidean spaces), compactness certainly presents some advantages. Specifically, any Borel measure defined on a compact metric space is a Radon measure. In our arguments, we indeed had to handle the space of Radon measures, as we relied on the Riesz Representation Theorem. However, by assuming compact metric space, we were able to work with the more tractable space of Borel measures instead. Of course, as you pointed out, when the parameter space is extended to all of $\\\\mathbb{R}^{D}$, the space remains complete and separable, so the same argument can still apply (see Theorem 7.1.7 in [1]). \\n\\n\\nWe greatly appreciate your insightful feedback. We also believe that a significant portion of our research can be developed on $\\\\mathbb{R}^{D}$. Furthermore, we are grateful for the abundant references you provided regarding NNGP (Neural Network Gaussian Process). 
In particular, we have learned a great deal about the statistical perspective from the reference [3] you provided. Once again, thank you very much.\", \"references\": \"[1] Bogachev, Vladimir Igorevich, and Maria Aparecida Soares Ruas. Measure theory. Vol. 2. Berlin: springer, (2007).\\n\\n\\n[2] Steinwart, Ingo. \\\"Reproducing kernel Hilbert spaces cannot contain all continuous functions on a compact metric space.\\\" Archiv der Mathematik 122.5 (2024): 553-557.\\n\\n\\n[3]Radford M Neal. Priors for infinite networks. In Bayesian Learning for Neural Networks, Lecture Notes in Statistics, pp. 29\\u201353. Springer New York, New York, NY, 1996. ISBN 0387947248.\"}", "{\"comment\": \"We sincerely appreciate your thoughtful review and valuable discussion.\"}", "{\"comment\": \"Dear Reviewer UkFb,\\n\\nYour valuable feedback has been incredibly helpful to us. In response to Weakness1 you mentioned, we have added further elaboration on the direction of our research in relation to existing studies. Regarding Question1 and Question3, we have provided additional explanations of the relevant definitions in the main text of our revised version.. Once again, we sincerely appreciate your thoughtful review.\"}", "{\"comment\": \"Question1.\\nWhat is $\\\\mathcal{S}$ in the diagram in Figure 1?\", \"answer\": \"There can be various concepts of summation for spaces, but in our paper, we consistently describe the summation notation we wish to use. Therefore, the notation $\\\\sum\\\\_{i\\\\in [n]}^{2}$ used in Section 5 refers to RKHS, which corresponds to the sum of the RKBS defined in lines 324-329. The reason we use $p=2$ is that the sum of the RKBS derived from the 2-norm direct sum $\\\\bigoplus\\\\_{i\\\\in [n]}^{2}$ results in an RKHS. 
In other words, $\\sum\\_{i\\in [n]}^{2}\\mathcal{L}\\_{\\sigma}^{2}(\\mu\\_{i})$ is always a Hilbert space, whereas $\\sum\\_{i\\in [n]}^{1}\\mathcal{L}\\_{\\sigma}^{2}(\\mu\\_{i})$ is generally not a Hilbert space.\\n\\n\\nThank you for taking the time to read our work. Your review has been immensely helpful to us.\"}", "{\"comment\": \"Thank you for the response. The connection between the proposed decomposition and the multiple kernel methods is interesting. I have a question about this point. In practical cases, we use a finite number of kernels for multiple kernel methods, but neural networks are also represented by a finite number of weight parameters and the finite sum instead of the integral. Does this mean that if we adopt a certain discretization of the integral and obtain a practical neural network, then it is equivalent to a kernel machine with certain multiple kernels?\"}", "{\"comment\": \"Question1.\\nCould the authors provide a concrete example of how the decomposition of an RKBS improves the understanding or efficiency of neural network analysis? An illustrative example or a simple simulation would greatly help clarify the practical benefits.\", \"answer\": \"In Question 1, we explained the intuition behind our theory. However, we currently do not have a way to implement and prove this experimentally. As mentioned in Weakness 2, this is because we do not have an algorithm to optimize the integral RKBS $\\\\mathcal{F}\\\\_{\\\\sigma}(\\\\mathcal{X},\\\\Omega)$ in a way that guarantees the existence of a solution. Since we do know an algorithm (which finds a solution guaranteed by the Representer Theorem) to compute $\\\\sum\\\\_{i\\\\in [n]}^{2}\\\\mathcal{L}\\\\_{\\\\sigma}^{2}(\\\\mu\\\\_{i})$, how about comparing the results using the existing method for optimizing one-layer neural networks under square loss? 
However, this approach would also struggle to prove space size comparisons in practice, and even if the experimental results for one-layer neural networks are good, they would not be reliable due to the reasons mentioned earlier. Instead, our approach suggests future research possibilities for practical directions. The candidate approach is to progressively add $\\\\mathcal{L}\\\\_{\\\\sigma}^{p}(\\\\mu\\\\_{i})$ terms in a bottom-up manner, getting as close as possible to the integral RKBS. In summary, we have conducted research to reduce the gap between theory and practical experiments, and our goal is to find concrete (numerically feasible) algorithms that are guaranteed by the theory. This is the direction we aim to pursue.\\n\\n[1] Ali Rahimi and Benjamin Recht. \\u201cRandom features for large-scale kernel machines\\u201d. In: Advances in neural information processing systems 20 (2007).\\n\\n[2] Francis Bach. \\u201cBreaking the curse of dimensionality with convex neural networks\\u201d. In: The Journal of Machine Learning Research 18.1 (2017), pp. 629\\u2013681.\"}", "{\"summary\": \"Reproducing kernel Hilbert spaces (RKHS) can be decomposed into a sum of RKHS. A natural generalization is to consider reproducing kernel _Banach_ spaces (RKBS) and to decompose it as a sum of RKBSs. Defining the sum is non-trivial, and the authors take on this task. Furthermore, given an RKBS with an integral, the authors provide a way to decompose it into a sum of RKBSs. The authors claim that this establishes a connection to neural networks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Banach spaces have a much wider range of options than Hilbert spaces, which seems like a good motivation to consider this type of spaces.\\n2. The authors hint at novel ideas connecting integral RKBS to the study of neural networks (however, see Weakness 1).\\n3. 
The authors are very comprehensive to a reader \\u2013 such as myself, who has not seen several of the relevant concepts since a real analysis course.\", \"weaknesses\": \"1. The authors claim that there is a connection with neural networks, but do not make it clear nor precise. For example, the only mention of neural networks are in the introduction and a single mention in Subsection 3.3, without going into detail of the correspondence between the terms developed in the paper and neural networks.\\n2. Immediately after Proposition 3.7 the authors mention the feature map ($s$) and the RKBS ($\\\\mathcal{S}$), without an explicit definition in the main text. As the definitions are available in the appendix, I think it would strengthen the paper to include them in the main text. \\n3. Presumably there is a correspondence between the triples $(\\\\Psi,\\\\psi,A)$ that the authors see with neural network concepts, but with the current status of the paper is quite hard to understand. Can the authors make this relationship explicit?\", \"questions\": \"### Questions:\\n1. How important is the compactness of $\\\\Omega$ for the main results? By the work of Neal (1996) we know that for _decent_ densities (e.g., finite moments and bounded activation functions) we have a kernel similar to the kernel of $L^2$ stated in line 299, even when $\\\\Omega$ is unbounded. \\n2. It is not immediately clear to me why $\\\\mathrm{Im}(A)$ has finite dimension as indicated in line 284. Perhaps I am missing something. Is there a specific reference or lemma that makes it clear?\\n3. The comment on lines 414-417, about the Tietze extension, seems out of place. Should it be in the proof of Proposition 5.2?\\n\\n### Citation issues:\\n- Citations are not consistent throughout, which makes it harder on the reader to digest this technical paper. A good guide is in https://guides.library.unr.edu/apacitation/in-textcite. 
An example of this is the third line of the first section, which includes the name of the authors twice.\\n- Proper names are typically capitalized, even in the references. e.g. 'Banach' in Bartolucci et al (2023).\\n\\n### Minor notation issues:\\n- The authors use $\\\\langle \\\\cdot ,\\\\cdot \\\\rangle$ and $<\\\\cdot ,\\\\cdot>$ interchangeably for inner products. It would be good to standardize notation, or clarify what is the main difference between these two notations. For example, in page 6, line 291 uses $<\\\\cdot ,\\\\cdot>$ while line 320 uses $\\\\langle \\\\cdot ,\\\\cdot \\\\rangle$ , with no clear difference between them. Perhaps one of the two notations corresponds to a semi-inner product, but at the moment this distinction is not at all clear.\\n- I could not find a definition of $\\\\mu \\\\perp\\\\nu$ in the text, used in line 161. I assume it means something like $\\\\int_{\\\\Omega} \\\\mu(\\\\omega)d\\\\nu(\\\\omega)=0$, or something in that sense. This could be easily added.\\n\\n### Minor grammatical mistakes:\\n- Line 350, 'an another' should just be 'another'\\n- There is a repetition of \\\"equation\\\" in Remark 3.8.\\n\\n### References\\n\\nFrancesca Bartolucci, Ernesto De Vito, Lorenzo Rosasco, and Stefano Vigogna. Understanding neural networks with reproducing kernel Banach spaces. Applied and Computational Harmonic Analysis, 62:194\\u2013236, 2023\\n\\nRadford M Neal. Priors for infinite networks. In Bayesian Learning for Neural Networks, Lecture Notes in Statistics, pp. 29\\u201353. Springer New York, New York, NY, 1996. ISBN 0387947248.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Weakness3.\\nAround 5 of the 8 pages are about the definitions or restating results in previous literature. 
It would be great if this work could spend some space on (1) the potential benefits of their results, (2) takeaway messages about RKBS, (3) technical difficulties encountered and solved, and (4) novel mathematical tools and techniques that are of independent interest. It is otherwise unclear what would be the central contribution of this work.\", \"answer\": \"We would like to intuitively explain the contributions of our research. I would like to emphasize that all the content in our propositions and theorems that does not reference existing literature is new theory that we have developed. The specific details are as follows.\\n\\nRegarding Proposition 3.5:\\nWe have shown that the bounded operator $A:\\\\mathcal{M}(\\\\Omega)\\\\rightarrow C(\\\\mathcal{X})$ is a compact operator. From this fact, we deduce that the hypothesis space of one-layer neural networks, the integral RKBS, is strictly smaller than the space of continuous functions $C(\\\\mathcal{X})$. To the best of our knowledge, this result has not been previously established in the literature. This implies that while we know from the Universal Approximation Theorem that one-layer neural networks can approximate any continuous function, the hypothesis space of one-layer neural networks cannot cover the entire space of continuous functions. In other words, while the target function we aim to find through the neural network can approximate all continuous functions, an arbitrary continuous function cannot be the target function we are working with.\\n\\nRegarding Proposition 3.7: The sum of RKHSs (as established in classical results by Aronszajn [1]) has generally only been justified for finite sums. To address this, we used the characterization theorem from [2]. 
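For background, the classical finite-sum statement from Aronszajn [1] referenced here can be written out explicitly (a standard formulation we are adding for context, stated for two RKHSs $H_{1}, H_{2}$ on a common domain with kernels $k_{1}, k_{2}$; it is not a claim from the submission itself):

```latex
% Aronszajn's finite sum of two RKHSs: the sum is again an RKHS,
% its kernel is the sum of the kernels, and its norm is a minimum
% over all two-term decompositions of f.
\[
  H_{1} + H_{2} = \{\, f_{1} + f_{2} : f_{i} \in H_{i} \,\}, \qquad
  k = k_{1} + k_{2},
\]
\[
  \lVert f \rVert_{H_{1}+H_{2}}^{2}
  = \min\bigl\{ \lVert f_{1} \rVert_{H_{1}}^{2} + \lVert f_{2} \rVert_{H_{2}}^{2}
      \;:\; f = f_{1} + f_{2},\ f_{i} \in H_{i} \bigr\}.
\]
```

Passing from this minimum over two-term decompositions to an infimum over possibly infinite families is what requires the summability machinery introduced in the Preliminaries.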
The infinite sum we propose can only be justified by the feature map $\\\\mathbf{s}$ and RKBS map $\\\\mathcal{S}$ defined using equation (2.1) line 132-133, and to the best of our knowledge, there has been no explicit definition of this in previous studies. This may raise the question of why we need to introduce an infinite sum. Since the hypothesis space we aim to work with in machine learning is an infinite vector space, we cannot handle all elements using concepts like the Hamel basis, and thus, the introduction of concepts like the Schauder basis becomes necessary. In a similar context, we required the infinite sum of RKBSs.\\n\\nRegarding Proposition 4.2: This theorem shows that for an index set $I$ with arbitrary cardinality, the direct sum structure of the feature space is compatible with the sum structure of the RKBS defined in Proposition 3.7. While the result is easily understandable for finite sums, there had been no known results for infinite sums. Since this is stated quite generally, we believe it can be extended to treat other Banach spaces, such as Sobolev spaces, or metrizable locally convex spaces like Fr\\u00e9chet spaces, as feature spaces. Additionally, we believe this theorem strengthens the philosophy that understanding the feature space allows for a better understanding of the hypothesis space.\\n\\nRegarding Theorem 4.4:\\nWe have successfully decomposed the integral RKBS, which serves as the hypothesis space for neural networks, while maintaining the RKBS structure, using Proposition 3.7 and Proposition 4.2. As we mentioned in Remark 4.5, this shows that by using spaces such as $\\\\mathcal{L}\\\\_{\\\\sigma}^{1}(\\\\mu\\\\_{i})$ (which are more tractable due to their separability), we can better understand the hypothesis space of neural networks. 
Furthermore, because we preserved the RKBS structure in the decomposition, we expect that if we can find a kernel learning algorithm for $\\\\mathcal{L}\\\\_{\\\\sigma}^{1}(\\\\mu)$, we could derive a multiple kernel learning algorithm to approximate the approximation power of neural networks.\\n\\n\\n[1] Nachman Aronszajn. \\u201cTheory of reproducing kernels\\u201d. In: Transactions of the American mathematical society 68.3 (1950), pp. 337\\u2013404.\\n\\n[2] Patrick L Combettes, Saverio Salzo, and Silvia Villa. \\u201cRegularized learning schemes in feature Banach spaces\\u201d. In: Analysis and Applications 16.01 (2018), pp. 1\\u201354.\"}", "{\"comment\": \"First of all, we sincerely thank you again for your kind responses and questions.\\nWe speculate that when considering a model fixed with a finite number of $m$ neurons, the Representer Theorem itself may not provide much useful information. As you know, the intuitive implication of the Representer Theorem is that we only need to consider a finite number of neurons (or a finite sum of kernel functions) instead of an infinite number. Nevertheless, if the number of neurons $m$ is overwhelmingly larger than the number of data points $n$ $(m>>n)$, it seems possible to make meaningful observations.\\n\\nIf the question is not about a model with a fixed \\n$m$ neurons but instead about applying the Representer Theorem to the entire integral RKBS \\n$\\\\mathcal{F}\\\\_{\\\\sigma}(\\\\mathcal{X},\\\\Omega)$ and its relationship to our decomposition, then this is closely related to the problem we mentioned as a topic of interest in our future work. 
We believe that since the Representer Theorem determines the properties of the hypothesis space for solving a machine learning problem with a given finite dataset, exploring whether these properties allow us to select a meaningful finite index set from our decomposition $\\\\mathcal{F}\\\\_{\\\\sigma}(\\\\mathcal{X},\\\\Omega)=\\\\sum\\\\_{i\\\\in I}\\\\mathcal{L}\\\\_{\\\\sigma}(\\\\mu\\\\_{i})$ would be an interesting question.\\n\\nHowever, we hold a slightly negative view regarding this approach. This is because, even if the space is constrained in this way, it would likely become highly data-dependent. The bottom-up approach that we have consistently mentioned can be seen as an alternative concept to the method described above. In this regard, there is some remarkable research. Specifically, if one defines an integral RKBS based on the ridgelet transform form $g(<x,\\\\hat{w}>-b)$\\nand uses a sufficiently good activation function, it has been shown that there exists an RKHS containing the integral RKBS ([1]). We believe this provides additional potential for understanding integral RKBS from a top-down perspective.\\n\\nIt seems this is related to the question you mentioned in Weakness 3. We apologize for not addressing this point when responding to Weakness 3. Discussions with you have been incredibly helpful and delightful for us. Please feel free to ask further questions if you have any, and we will do our best to answer them to the best of our knowledge.\\n\\n\\n[1] Sch\\u00f6lpple, M. and Steinwart, I. (2023). Which Spaces can be Embedded in Reproducing Kernel Hilbert Spaces?. arXiv preprint arXiv:2312.14711.\"}", "{\"comment\": \"Question1.\\nIn definition 3.4, $\\\\sigma$ can be any element in $C(\\\\mathcal{X}\\\\times\\\\Omega)$. Does this mean we can deal with deep neural networks by properly setting $\\\\sigma$? 
In that case, I think this framework is more flexible than other methods like ridgelet transform [1] (in the framework of ridgelet transform, we have to consider the form $\\\\sigma(\\\\left<w,x\\\\right>-b)$). Can we apply this framework of RKBS to show the universality of deep neural networks?\", \"answer\": \"4.Relationship with deep neural networks: To be honest, we are not yet certain whether the flexible notation used in our integral RKBS can be applied to the theoretical analysis of deep neural networks. As mentioned in point 1, we do not rule out such a possibility. However, whether we consider the hypothesis space or universality of deep neural networks, the fundamental philosophy should involve examining models in the context of infinite width. Additionally, since discrete functions also need to be taken into account, a more detailed approach would likely be required. Thus, at the current stage, we believe it is not feasible. Instead, there are a few well-known studies that aim to model the hypothesis space of deep neural networks. Among them, we would like to introduce the following paper, which models deep neural networks using Deep Integral RKBS [2].\\n\\n[1] S. Sonoda and N. Murata, \\\"Neural network with unbounded activation functions is universal approximator\\\", Applied and Computational Harmonic Analysis, 43(2): 233-268.\\n\\n[2] Francesca Bartolucci et al. \\u201cNeural reproducing kernel Banach spaces and representer theorems for deep networks\\u201d. In: arXiv preprint arXiv:2403.08750 (2024).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work studies the reproducing kernel Banach spaces (RKBSs). Specifically, it shows that the integral RKBSs can be decomposed into a sum of a set of RKBSs each defined based on a different measure. 
It then presents an application of the decomposition result, showing that the RKHSs are contained in the RKBSs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"This work gives a thorough presentation of the concepts related to the sum of the RKBSs proposed here.\", \"This work shows a nice property that the sum of the feature spaces is compatible with the sum of the RKBSs.\"], \"weaknesses\": [\"The related work section is not informative. In particular, Section 1.1 does not introduce the advantages and, importantly to this paper, the limitations of RKHS, and it does not address the previous literature on RKBS nor what questions the literature has solved with RKBS. Also, it does not provide any motivation for the results presented in this work. It is mainly just a list of abbreviated references.\", \"At the end of page 1, this work poses the question it aims to address: decomposing the integral RKBS into more fundamental blocks. But it does not touch on the motivation behind this decomposition or what results one can get with it.\", \"Around 5 of the 8 pages are about the definitions or restating results in previous literature. It would be great if this work could spend some space on (1) the potential benefits of their results, (2) takeaway messages about RKBS, (3) technical difficulties encountered and solved, and (4) novel mathematical tools and techniques that are of independent interest. It is otherwise unclear what would be the central contribution of this work.\"], \"questions\": [\"Please see weaknesses above.\", \"What is $\\\\mathcal{S}$ in the diagram in Figure 1?\", \"What are some potential applications of the results presented in this work? 
What are some specific examples of machine learning tasks or theoretical problems where the RKBS decomposition might provide advantages over RKHS approaches?\", \"What does $2$ mean in $\\sum_{i\\in [n]}^2$ in Section 5?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors represent neural networks using an integral RKBS, where the feature space is the measure space corresponding to the distribution of the weight of the final layer. They characterize the decomposition of RKBSs via the decomposition of the feature spaces, and show the integral RKBS representing neural networks is decomposed into the sum of a family of p-norm RKBSs, each of which is characterized by a probability measure.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Applying the theory of RKBSs to analyzing neural networks and reducing the problems to those in the feature spaces of the RKBSs is an interesting approach. The result is solid and the mathematical notions are carefully introduced. I think this paper provides a direction of future studies of the theory of neural networks.\", \"weaknesses\": [\"Although this paper is well-organized and the mathematical notions are clear, for readers in the machine learning community, I think more explanation is needed that shows the connection between the theoretical approaches and results and practical neural networks.\", \"In my understanding, the decomposition is by virtue of the decomposition of the measure space (feature space), and that is why RKBSs are useful in the analysis of neural networks. I think the reason why RKBSs are useful should be clearly explained in the main text.\", \"The motivation of the decomposition should be explained from the perspective of neural networks. 
I thought that since the practical neural networks are represented by the sum involving the weight, instead of the integral involving the distribution of the weight, the decomposition of the integral RKBS into the sum of the family of smaller RKBSs makes the representation more practical. Is this interpretation correct? I think the advantage of the decomposition should be discussed from the perspective of the application to the analysis of neural networks.\", \"Maybe related to the above point, but can we construct the family $\\{\\mu_i\\}$ in Theorem 4.4 explicitly? I think understanding $\\{\\mu_i\\}$ is important for the analysis of the weights of neural networks. Do you have any examples or any comments on this point?\", \"If the motivation related to neural networks becomes clearer, I will consider raising my score.\"], \"questions\": \"Questions:\\n- In definition 3.4, $\\sigma$ can be any element in $C(\\mathcal{X}\\times\\Omega)$. Does this mean we can deal with *deep* neural networks by properly setting $\\sigma$? In that case, I think this framework is more flexible than other methods like ridgelet transform [1] (in the framework of ridgelet transform, we have to consider the form $\\sigma(\\langle w,x\\rangle-b)$). Can we apply this framework of RKBS to show the universality of deep neural networks?\", \"minor_comments\": [\"p5, line 276, \\\"when $\\mathcal{X}$ be ...\\\" should be \\\"when $\\mathcal{X}$ is ...\\\" ?\", \"In Remark 3.8, \\\"equation equation 3.2\\\" should be \\\"equation 3.2\\\".\"], \"references\": \"[1] S. Sonoda and N. Murata, \\\"Neural network with unbounded activation functions is universal approximator\\\", Applied and Computational Harmonic Analysis, 43(2): 233-268.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your explanation. 
I agree with your argument.\"}", "{\"comment\": \"Dear Reviewer mrzL,\\n\\nThank you very much for your valuable feedback. In response to Weakness1 you raised, we have added an explanation regarding the relationship with one-layer neural networks in the revised Section 3.3. Additionally, as per your suggestion, we have further elaborated on Weakness2 in Proposition 3.7 and have uploaded the updated version. We have also addressed the citation and minor notation issues you pointed out.\\n\\nWe truly appreciate the time and effort you took in reviewing our paper, and your feedback has been incredibly helpful in improving our work. Once again, thank you.\"}", "{\"comment\": \"Question1.\\nIn definition 3.4, $\\\\sigma$ can be any element in $C(\\\\mathcal{X}\\\\times\\\\Omega)$. Does this mean we can deal with deep neural networks by properly setting $\\\\sigma$? In that case, I think this framework is more flexible than other methods like ridgelet transform [1] (in the framework of ridgelet transform, we have to consider the form $\\\\sigma(\\\\left<w,x\\\\right>-b)$). Can we apply this framework of RKBS to show the universality of deep neural networks?\", \"answer\": \"That is truly an insightful question! You've raised a great point regarding the notation we used. Additionally, the references you cited are not only excellent in content but also serve as a great example that highlights the core of the question. To clarify, let me explain in a few steps.\\n\\n1.Regarding the flexibility of $\\\\sigma$: As you pointed out, defining $\\\\sigma(x, w) = g(\\\\left<x, \\\\hat{w}\\\\right> - b)$ in our setting precisely models a one-layer neural network. In the original paper [2] where integral RKBS was first defined, the method you mentioned was used exactly as you described. 
However, there are several reasons for using a more flexible notation like the one we adopted.\\nFirst, it was used to clearly establish that $\\\\sigma(x, \\\\cdot) \\\\in C(\\\\Omega)$ for all $x \\\\in \\\\mathcal{X}$.\\nSecond, despite this flexibility, it did not hinder the proof of our results in any way.\\nThird, as you mentioned, we have not excluded the possibility that deep neural networks themselves can be represented by integral RKBS, and this will be further elaborated later.\\nFinally, let me explain the advantages of using such a highly flexible representation. Our Proposition 5.2 shows that the hypothesis space formed by summing RKHSs, which are constructed by setting input-layer parameters from different activations and distributions, can be embedded into the integral RKBS with the activation function we have constructed. This is likely a result that cannot be obtained using a more limited notation such as $g(\\\\left<x, \\\\hat{w}\\\\right> - b)$. We believe this result indirectly demonstrates how increasing the dimensionality of parameters, such as the bias term, can significantly enhance the expressive power of the model.\\n\\n2. Regarding the referenced paper [1]: While it may not be directly related to your question, I would like to provide additional explanation for clarity. Although I have not had time to read it in detail, I will do my best to explain based on what I know. 
First, the notation in question seems to refer to equation (2) in the paper: \\n$$\\\\int_{\\\\mathbb{Y}^{m+1}}T(\\\\mathbf{a},b)\\\\eta(\\\\mathbf{a}\\\\cdot \\\\mathbf{x}-b)d\\\\mu(\\\\mathbf{a},b)$$\\nwhich is similar to the form expressed by the elements of $\\\\mathcal{L}\\\\_{\\\\sigma}^{p}(\\\\mu)$\\nin our submission (though the notion of metric between elements differs since ours is formulated as an RKBS).\\nThe cited paper demonstrates the \\n$L^{2}$-sense universality of shallow neural networks defined on an unbounded parameter space, $\\\\mathbb{Y}^{m+1} = \\\\mathbb{R}^{m+1}$(Theorem 5.11). To efficiently handle unbounded activations defined on an unbounded parameter space, the authors employed elegant concepts from distribution theory in their proof. However, universal approximation itself is somewhat different from our objective. Our focus lies in analyzing the hypothesis space of neural networks. By working with a compact parameter space $\\\\Omega$, we avoided challenges associated with unbounded activations.\\n\\n3.Regarding universality: As you may know, the universality of neural networks can be proven in various ways depending on the objective. While there are multiple classification schemes, I would like to focus on approaches that explicitly utilize the hypothesis space of neural networks. Specifically, Theorem 3.8 of [3] provides an example of this methodology. One advantage of such an approach, compared to traditional studies on universality, is that it allows us to directly control the approximation quality by using the norm of the hypothesis space, which is the same concept of distance used during actual machine learning training. The paper we referenced, focuses on Barron spaces. However, Theorem 2.3 of the same paper([3]) shows that when using the ReLU activation function, the Barron space and the integral RKBS are equivalent. Therefore, we believe the same results can be applied to the integral RKBS class.\\n\\n[1] S. Sonoda and N. 
Murata, \\\"Neural network with unbounded activation functions is universal approximator\\\", Applied and Computational Harmonic Analysis, 43(2): 233-268.\\n\\n[2] Francesca Bartolucci et al. \\u201cUnderstanding neural networks with reproducing kernel Banach spaces\\u201d. In: Applied and Computational Harmonic Analysis 62 (2023), pp. 194\\u2013236.\\n\\n[3]E Weinan and Stephan Wojtowytsch. \\u201cRepresentation formulas and pointwise properties for Barron functions\\u201d. In: Calculus of Variations and Partial Differential Equations 61.2 (2022), p. 46.\"}", "{\"summary\": \"The authors define the sum of RKBSs using a characterization theorem, investigate its compatibility with the direct sum of feature spaces, and decompose the integral RKBS $ F_\\\\sigma(X, \\\\Omega) $ into the sum of $p$-norm RKBSs $\\\\{L^1_\\\\sigma(\\\\mu_i)\\\\}_{i \\\\in I}$. This study enhances the structural understanding of the integral RKBS class, offering theoretical insights that can help analyze the performance of neural networks by decomposing complex function spaces into simpler, manageable components.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper introduces an innovative framework for decomposing integral RKBSs, offering a novel interpretation of one-layer neural networks. This approach is unique in its use of Banach spaces and their decomposition to analyze function spaces, advancing the existing understanding of RKBSs.\\n\\nThe decomposition of RKBSs has significant implications for the analysis of neural networks, especially in designing kernel-based learning algorithms. 
The compatibility between the sum of RKBSs and the direct sum of feature spaces represents a meaningful advancement in understanding how integral RKBSs can be decomposed, which could potentially impact practical applications in machine learning, such as multiple kernel learning.\", \"weaknesses\": \"The paper could benefit from more illustrative examples to make the abstract mathematical concepts more accessible to a broader audience, particularly those in the machine learning community without a strong background in functional analysis.\\n\\nThe experimental results are limited, and the practical implications of the theoretical findings are not fully demonstrated through empirical evaluation. Including numerical examples or simulations to show the decomposition's effects on real-world neural network performance would significantly improve the paper's practical relevance.\\n\\nThe presentation of some key definitions and theorems is rather dense, making it difficult for readers to follow the logical flow. Providing intuitive explanations alongside formal proofs would help bridge the gap for less mathematically inclined readers.\", \"questions\": \"1. Could the authors provide a concrete example of how the decomposition of an RKBS improves the understanding or efficiency of neural network analysis? An illustrative example or a simple simulation would greatly help clarify the practical benefits.\\n\\n2. Would the authors consider adding a numerical evaluation to demonstrate the theoretical claims empirically? This would help bridge the gap between the abstract mathematical results and their practical implications in machine learning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive evaluation of our research results. Your insightful questions are closely related to the future research directions we have in mind. 
We appreciate the opportunity to elaborate on this. Additionally, Question 1 was a profound question that we had not anticipated. We will do our best to provide thorough and sincere answers to all of these points. I would like to express my gratitude for taking the time to review.\\n\\nWeakness1.\\nIn my understanding, the decomposition is by virtue of the decomposition of the measure space (feature space), and that is why RKBSs are useful in the analysis of neural networks. I think the reason why RKBSs are useful should be clearly explained in the main text.\", \"answer\": \"Yes, that\\u2019s correct. Your interpretation aligns with one of our motivations for decomposing the integral RKBS.\\nTo begin with, in order to ensure the existence of a solution to the problem we aim to solve through machine learning, the Representer theorem is essential. One paper that successfully proves the Representer theorem for neural networks is Theorem 3.9. in [1]. However, it is important to note that guaranteeing the existence of a solution and obtaining a concrete algorithm are entirely different matters. To be more specific, there is no direct method to find the extremal points in Theorem 3.9 above. Therefore, it is likely that the optimization algorithms currently used with neural networks do not guarantee the existence of a solution as stated in the Representer theorem. In an effort to address this issue, we decomposed the hypothesis space of neural networks, the integral RKBS $\\\\mathcal{F}\\\\_{\\\\sigma}(\\\\mathcal{X},\\\\Omega)$, into more tractable components, such as $\\\\mathcal{L}\\\\_{\\\\sigma}^{1}(\\\\mu\\\\_{i})$, while preserving the RKBS structure. In our approach, the $\\\\mathcal{L}\\\\_{\\\\sigma}^{1}(\\\\mu\\\\_{i})$ spaces are separable, and by the Theorem 2.4. in [2], we can also ensure the existence of the kernel. We thought that if the mathematical formulation was not fully refined, it could lead to misunderstandings by the readers. 
Additionally, since we were not confident enough, we decided not to mention our expectations in the manuscript. We believe that the comment on Weakness 3 could help us explain our perspective on the applications more intuitively, so we will continue to elaborate on this point in the Weakness 3.\\n\\n[1] Francesca Bartolucci et al. \\u201cUnderstanding neural networks with reproducing kernel Banach spaces\\u201d. In: Applied and Computational Harmonic Analysis 62 (2023), pp. 194\\u2013236.\\n\\n[2] Rong Rong Lin, Hai Zhang Zhang, and Jun Zhang. \\u201cOn reproducing kernel Banach spaces: Generic definitions and unified framework of constructions\\u201d. In: Acta Mathematica Sinica, English Series 38.8 (2022), pp. 1459\\u20131483.\"}", "{\"comment\": \"Weakness3.\\nMaybe related to the above point, but can we construct the family $\\\\mu\\\\_{i}$ in Theorem 4.4 explicitly? I think understanding $\\\\mu\\\\_{i}$ is important for the analysis of the weights of neural networks. Do you have any examples or any comments on this point?\", \"answer\": \"This is an important question from an application perspective. We also believe that understanding $\\\\mu\\\\_{i}$, which corresponds to the weights of neural networks, is crucial for our analysis. However, the family of $\\\\\\\\{\\\\mu\\\\_{i}\\\\\\\\}\\\\_{i\\\\in I}$ we use in Theorem 4.4 are constructed via Zorn's Lemma as described in lines 161-162, so they cannot be explicitly computed. In this regard, we have introduced Proposition 5.1, Proposition 5.2, and Remark 5.3 as part of a discussion on future research directions for applications. Let me explain Proposition 5.1. As you know, $\\\\mathcal{L}\\\\_{\\\\sigma}^{2}(\\\\pi)$ is a model for one-layer neural networks with infinite width, where the input-layer parameters are fixed by the distribution $\\\\pi$, and only the layer-output parameters are learned. This model is formulated in the framework of RKHS (Please see [1], [2]). 
We consider an arbitrary finite singular probability measure $\\\\\\\\{\\\\mu\\\\_{i}\\\\\\\\}\\\\_{i \\\\in [n]}$ defined on the input-layer parameter space $\\\\Omega$, and (intuitively, if we decompose the parameter space $\\\\Omega$ into a finite number of domains), we showed that the sum space $\\\\sum\\\\_{i\\\\in[n]}^{2}\\\\mathcal{L}\\\\_{\\\\sigma}^{2}(\\\\mu\\\\_{i})$ of the RKHSs derived from this measure family is embedded into the integral RKBS $\\\\mathcal{F}\\\\_{\\\\sigma}(\\\\mathcal{X},\\\\Omega)$, which serves as the hypothesis space for one-layer neural networks. This implies that the multiple kernel methods for the models discussed earlier perform worse in approximation power than the one-layer neural networks. Furthermore, our theory extends beyond the case where $p \\\\neq 2$ and includes infinite (countably infinite singular measure families). The diagram below illustrates this situation.\\n$$\\n\\\\\\\\{\\\\text{ family of singular probability measures } \\\\mu\\\\_{i} \\\\text{ on } \\\\Omega\\\\\\\\} \\\\rightarrow \\\\sum\\\\_{i\\\\in [n]}^{p}\\\\mathcal{L}\\\\_{\\\\sigma}^{p}(\\\\mu\\\\_{i}) \\\\overset{\\\\text{distance}}{\\\\hookrightarrow} \\\\mathcal{F}\\\\_{\\\\sigma}(\\\\mathcal{X},\\\\Omega) = \\\\sum\\\\_{i\\\\in I}^{1}\\\\mathcal{L}\\\\_{\\\\sigma}^{1}(\\\\mu\\\\_{i})\\n$$\\nAs mentioned earlier in Weakness 2, the current method for optimizing neural networks is not one that guarantees the existence of solutions based on the Representer theorem. We consider a bottom-up approach to asymptotically approximate the integral RKBS $\\\\mathcal{F}\\\\_{\\\\sigma}(\\\\mathcal{X},\\\\Omega)$ as our future research direction, and we aim to use this approach to find specific algorithms that ensure the existence of solutions as guaranteed by the Representer theorem.\\n\\n\\n[1] Francis Bach. \\u201cBreaking the curse of dimensionality with convex neural networks\\u201d. In: The Journal of Machine Learning Research 18.1 (2017), pp. 
629\\u2013681.\\n\\n[2] Ali Rahimi and Benjamin Recht. \\u201cRandom features for large-scale kernel machines\\u201d. In: Advances in neural information processing systems 20 (2007).\"}" ] }
CFLEIeX7iK
Neural Solver Selection for Combinatorial Optimization
[ "Chengrui Gao", "Haopu Shang", "Ke Xue", "Chao Qian" ]
Machine learning has increasingly been employed to solve NP-hard combinatorial optimization problems, resulting in the emergence of neural solvers that demonstrate remarkable performance, even with minimal domain-specific knowledge. To date, the community has created numerous open-source neural solvers with distinct motivations and inductive biases. While considerable efforts are devoted to designing powerful single solvers, our findings reveal that existing solvers typically demonstrate complementary performance across different problem instances. This suggests that significant improvements could be achieved through effective coordination of neural solvers at the instance level. In this work, we propose the first general framework to coordinate the neural solvers, which involves feature extraction, selection model, and selection strategy, aiming to allocate each instance to the most suitable solvers. To instantiate, we collect several typical neural solvers with state-of-the-art performance as alternatives, and explore various methods for each component of the framework. We evaluated our framework on two extensively studied combinatorial optimization problems, Traveling Salesman Problem (TSP) and Capacitated Vehicle Routing Problem (CVRP). Experimental results show that the proposed framework can effectively distribute instances and the resulting composite solver can achieve significantly better performance (e.g., reduce the optimality gap by 0.88\% on TSPLIB and 0.71\% on CVRPLIB) than the best individual neural solver with little extra time cost.
[ "learn to optimize" ]
Reject
https://openreview.net/pdf?id=CFLEIeX7iK
https://openreview.net/forum?id=CFLEIeX7iK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xBQXLYYH5M", "wHv2LFKJTz", "w9htbAH8m2", "viSqmLyzOc", "szA43A9bYn", "rcM408ZDQz", "kRcy5NhMGQ", "VwSuI8P8Od", "Sx1pkXOoIh", "SK4PPDTCK9", "L1d4lc809a", "KYclmo2YQS", "K02zU7F7bK", "FPjSOz1RQH", "FGe9HQXR7A", "F1j6JD6AmF", "Ddb9C4hA6w", "CyyTzdcYZc", "Cg3ME9QcO7", "8cdhXE4WMH" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732270416466, 1734747292938, 1732537095581, 1732269459864, 1733312196885, 1732269562513, 1730612318230, 1732270590456, 1737524024734, 1730607657012, 1732715138788, 1730656471690, 1732271018789, 1732270672005, 1732269584234, 1729787031152, 1732270920960, 1733223927249, 1732674337863, 1732270870899 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Submission10081/Area_Chair_U1hn" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Submission10081/Reviewer_kQip" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10081/Reviewer_PbuG" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Submission10081/Reviewer_N15X" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ "ICLR.cc/2025/Conference/Submission10081/Reviewer_SaPT" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10081/Reviewer_kQip" ], [ "ICLR.cc/2025/Conference/Submission10081/Reviewer_PbuG" ], [ "ICLR.cc/2025/Conference/Submission10081/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for reviewing our paper. We sincerely appreciate your valuable comments, which are very helpful in refining our work. However, there may be some misunderstanding about the intended positioning of this paper. We have carefully revised our paper according to your comments and tried to clarify our motivation and contribution. Here are the detailed responses.\\n\\n**Response to \\u201cConnection to Neural Combinatorial Optimization and Novelty\\u201d**\\n\\nThank you for your valuable questions. We first want to emphasize that our main contribution is introducing algorithm selection to the community of Neural Combinatorial Optimization (NCO) for the first time, and showing its effectiveness even with a very straightforward implementation. Unlike traditional methods, NCO methods leverage neural networks to build data-driven solvers, obtaining good optimality gaps with significantly superior inference efficiency. However, inspired by the No-Free-Lunch theorem, we investigated the instance-level performance of prevailing NCO solvers and found that they demonstrate clear complementarity. This phenomenon emphasizes the potential of combining the advantages of state-of-the-art neural solvers and motivates our proposal of adaptively selecting suitable solvers for each instance. Since our work is intended as a pioneering attempt at neural solver selection, our main goal is to verify the possibility and benefits of solver selection for NCO. In our experiments, we found that even a straightforward method using hand-crafted features and classification models can outperform the state-of-the-art neural solver, which strongly indicates that solver selection is a promising direction for NCO. 
We believe our work can benefit the NCO community and inspire future research in this area. \\n\\nOn the other hand, the implementation of our neural solver selection framework has some advanced components. For example, we propose a new method of extracting instance features for NCO, which is different from previous works on the classical algorithm selection for TSP. Firstly, we verified the manual features proposed in classical algorithm selection for TSP [1] and found that they can only achieve limited performance (in Table 3 of the original paper). To address this, we proposed a novel pooling-based hierarchical encoder designed to extract richer instance features, leading to significantly better generalization performance. We believe that such an instance feature extraction method may also be helpful for improving other NCO methods, not limited to our neural solver selection framework.\\n\\nThank you again for your thoughtful comments. We sincerely hope the above clarification has made the main contribution of this work clear.\\n\\n**Response to \\u201cComparison with existing selection methods\\u201d**\\n\\nThank you for your valuable question. As we mentioned above, the primary contribution of our work lies in pioneering the integration of model selection into NCO. Thus, we mainly gave a straightforward implementation of the selection framework, and demonstrated its superiority against the best single individual solver through extensive experiments. This has achieved the goal of this work. But we agree that it is meaningful to add the discussion and comparison with existing selection methods from other areas.\\n\\nThanks to your suggestion, we have revised to discuss the difference with selection methods of non-neural solvers for TSP, and conduct detailed comparison experiments (as shown in the following table). Most algorithm selection methods follow a pipeline with two steps: Feature extraction and selection model training. 
Below, we summarize how our approach improves upon these steps:\\n1. For the feature extraction step, existing works for TSP [1,2,4] typically rely on hand-crafted features derived from cluster analysis, nearest-neighbor graphs, and other techniques. In contrast, we proposed a hierarchical graph encoder to learn feature representations in a data-driven manner. Both the original experiments (Table 3) and the newly added results demonstrate that our neural encoder significantly outperforms hand-crafted features, particularly in terms of generalization ability on unseen datasets.\\n2. For the selection model step, to our knowledge, most works [1,2,4] for TSP utilize traditional classification models like random forests or support vector machines, while we use a neural network trained by a learn-to-rank loss. We believe our neural selection model is a better choice so we didn't compare it with traditional models in our original experiments. Thanks to your suggestion, we include ablation of the neural selection model in the newly added experiments, which show that the neural selection model is better than the random forest.\\n\\n(Limited by space, the following contents are in the next block)\"}", "{\"metareview\": \"This paper proposed a learning based solver selection method for neural vehicle routing models. It involves feature extraction, selection model, and selection strategy, aiming to allocate each instance to the most suitable solver from a pool of neural VRP models. Reviewers agreed that the proposed method is interesting and neural solver selection is of practical meaning. However, they also raised several key concerns including 1) insufficient discussion and comparison to existing algorithm selection methods; 2) insufficient link to NCO; and 3) insufficient technical novelty. I particularly agree with the first point. 
Algorithm selection is a classic topic with many mature methods that can easily be applied to select neural VRP solvers, but this paper lacks a systematic discussion of the literature, as well as proper comparison to SOTA methods. In addition, the link to NCO is indeed not strong, as it appears to be a neural algorithm selection method that can be applied to other algorithms. So overall, this paper is interesting, but still requires major improvement to reach the acceptance threshold.\", \"additional_comments_on_reviewer_discussion\": \"Authors provided detailed responses with additional results. However, the key concerns mentioned above still remain. Though two reviewers increased their score, the overall evaluation is still borderline.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThank you for dedicating your time and effort to reviewing our paper. In response to your valuable comments and questions, we have made significant efforts to provide additional discussions and experimental results. As the ICLR public discussion phase will be ending in less than 2 days, we would like to kindly remind you and ask if our responses could address your concerns. Any further questions and comments are also welcomed! \\n\\n\\n\\nBest Regards,\\nAuthors\"}", "{\"comment\": \"Thank you for your positive review. Here are our detailed responses to your comments and questions, which we hope will address your concerns.\\n\\n**Response to your comment: The key idea behind this paper is similar to the paper: Zero Training Overhead Portfolios for Learning to Solve Combinatorial Problems (ZTop).**\\n\\nThanks for pointing out this related work. The ZTop method uses a fixed set of neural solvers to construct a portfolio for all instances, similar to other ensemble and population-based methods [1,2], which we discussed in the paper. 
Instead of employing a static portfolio of solvers, our method provides a more flexible solution by adaptively creating instance-specific portfolios, i.e., adaptively selecting the most suitable solvers for each instance. In fact, our experiments in Figure 3 have demonstrated that our proposed top-k strategy consistently outperforms the static portfolios (similar to ZTop) across different k values. We have revised to add some discussion about the relationship between our method and ZTop. Thank you.\\n\\n**Question 1: Results on large-scale instances like TSP-10000**\\n\\nThanks for your suggestion. The generalization performance of neural solvers on large-scale instances still remains to be an important challenge in the community. As our work focuses on exploring the benefits of instance-level neural solver selection, our experiments mainly follow common settings with scales under 1000, which is friendly to the prevailing methods. Thanks to your suggestion, we additionally examine larger-scale cases with larger-scale instances and newly added divide-and-conquer solvers [3,4]. However, solving TSP-10000 is too hard and memory-consuming for the most prevailing methods, so we increase the scale to $N=2000$ for the evaluation of large-scale performance. The results show that our selection framework can generalize to larger-scale instances where $N\\\\ge1000$. We have revised to add these results (i.e., Table 12) in the new version. We hope this can address your concerns. 
\\n\\n| Methods \\\\ Metrics | Optimality gap on TSP500-2000 | Time on TSP500-2000 |\\n| --- | --- | --- |\\n| Single best solver | 6.104% | 8.369s |\\n| Ours (Greedy) | 5.540% (0.038%) | 8.322s (0.036s) |\\n| Ours (Top-k, k=2) | **5.369% (0.003%)** | 15.566s (0.085s) |\\n| Single best of new solver pool | 3.562% | 5.274s |\\n| Ours with new solvers (Greedy) | 3.126% (0.002%) | 6.892s (0.006s) |\\n| Ours with new solvers (Top-k, k=2) | **2.955% (0.005%)** | 13.713s (0.036s) |\\n\\n**Question 2: Are the instances training the selection model generated from the same distribution of the testing instances?** \\n\\nIn the experiments, we evaluate our proposed method under two test settings: 1. In-Distribution: The instances for test are sampled from the same synthetic distribution with instances for training. 2. Out-of-Distribution: Instances sampled from synthetic distribution are used for training, and the popular problem library TSPLIB and CVRPLIB are used for test. For more details, please refer to Section 4.1 of our paper. \\n\\nThank you again for your valuable comments. We sincerely hope our response can answer your questions, and any further questions and discussions are very welcome.\", \"references\": \"[1] Ensemble-based deep reinforcement learning for vehicle routing problems under distribution shift. In Advances in Neural Information Processing Systems 36 (NeurIPS).\\n\\n[2] Winner takes it all: Training performant RL populations for combinatorial optimization. In Advances in Neural Information Processing Systems 36 (NeurIPS).\\n\\n[3] GLOP: Learning global partition and local construction for solving large-scale routing problems in real-time. In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI).\\n\\n[4] UDC: A unified neural divide-and-conquer framework for large-scale combinatorial optimization problems. 
In Advances in Neural Information Processing Systems 37 (NeurIPS).\"}", "{\"title\": \"Thanks for your feedback\", \"comment\": \"Thank you very much for your feedback. We are very pleased to hear that our reply has addressed many of your concerns.\\n\\nThe aim of this work is introducing the idea of neural solver selection to the NCO community for the first time, and showing its effectiveness with a straightforward implementation. Thus, we did not focus on the design of the components in our proposed selection framework. We fully agree that designing better components is interesting, e.g., designing better feature representation of neural solvers, exploring runtime-aware selection methods for neural solvers with different search budgets, and enhancing the solver pool by training, as we discussed in Section 5. We believe our work can open a new line for NCO, and inspire more follow-up works on neural solver selection.\\n\\nThank you again for dedicating your time and effort to review our paper.\"}", "{\"comment\": \"Thank you for your valuable comments. We sincerely appreciate your agreement on the effectiveness of the instance-specific solver selection proposed in our paper, which, we believe, has the potential to be a new branch of techniques for the application of NCO solvers. Meanwhile, we are very sorry for the possible unclear description, which may lead to confusion. We have carefully taken your reviews into account and revised our paper. Here are our detailed responses to your comments and questions, which we hope will address your concerns.\\n\\n**Response to weakness 1: Lack of discussion on different selection strategies**\\n\\nThank you for your comments. We have revised our paper to clarify the advantages of each selection strategy. According to the mechanisms of the four selection strategies, they have different preferences in the trade-off of efficiency and optimality. 
Generally, for efficiency, Greedy > Rejection \\u2248 Top-p > Top-k; for optimality, Top-k > Rejection \\u2248 Top-p > Greedy. Meanwhile, their hyper-parameters can be used for balancing efficiency and optimality as well. As a result, the choice of selection strategies can be decided by the users according to their preference, and we suggest using Top-p or Rejection as the default choice since they can adaptively select solvers based on the confidence of the selection model.\\n\\n**Response to Question 1: Detailed interpretation of the hierarchical encoder**\\n\\nFeature extraction plays an important role in selecting solvers for each instance. To obtain better instance features, we propose the hierarchical encoder which combines multi-level features together. In TSP and CVRP, certain nodes, such as those in clusters or geometric patterns, are particularly representative and informative for describing instance properties. By focusing on these nodes, we can effectively capture the spatial distribution of the entire instance. Our proposed hierarchical encoder is designed to identify these key nodes and create downsampled graphs, allowing it to concentrate on representative subgraphs and learn more robust features. The detailed processes are described in Section 3.1 of our main paper.\\n\\nThe overall performance of the hierarchical encoder, as shown in Table 3 of the main paper, demonstrates its superiority, especially across scales and distribution scenarios (TSPLib and CVPRLib). Thanks to your suggestion, we have revised to add Figure 6 to illustrate the retained nodes after downsampling. We can find some consistent patterns which are intuitively reasonable. We summarize them as three main points:\\n\\n1. Cluster nodes. As illustrated in Figures 6(a) and 6(b), when instances contain certain clusters, the hierarchical encoder tends to select a subset of \\u201crepresentative\\u201d nodes from each cluster, efficiently describing the entire spatial distribution.\\n2. 
Specific blocks. As illustrated in Figures 6(c) and 6(d), when instances contain specific complex geometric patterns like squares (Figure 6(c)) and arrays (Figure 6(d)), the hierarchical encoder can capture the nodes of these important areas to identify its characteristics.\\n3. Boundary nodes. For instances without clear sub-components, the hierarchical encoder tends to focus on boundary nodes that describe the global shape, as illustrated in Figures 6(e) and 6(f).\\n\\nFor your convenience, we also put the figures at [https://anonymous.4open.science/r/pics_for_analyzing_encoder-CA27/illustration_of_nodes.pdf](https://anonymous.4open.science/r/pics_for_analyzing_encoder-CA27/illustration_of_nodes.pdf).\\n\\nThank you very much for your valuable suggestion, which really has improved our work.\\n\\n**Response to Questions 2 & 3 about neural solver features**\\n\\nThanks for your thoughtful questions. When solver-specific features are absent, we associate neural solvers with the index of the MLP output. For instance, the first dimension of the MLP output corresponds to the score for the first neural solver. Extensive experiments in our paper have demonstrated that this simple approach has achieved good generalization across instances. \\n\\nUnder our proposed selection framework, constructing features of neural solvers and utilizing them for selection is very promising but challenging. In this paper, we made a preliminary exploration for integrating solver-specific features into the learning process, detailed in Section 5 and Appendix A.9. Our preliminary method involves learning a summary feature from representative instances of each neural solver. This method facilitates generalization to new neural solvers, allowing us to add them to the solver pool without fine-tuning the selection model. However, we did not observe an improvement in generalization over instances using this method. 
This suggests that further research is required to develop more sophisticated neural solver features, which could enhance model capacity and generalization performance. We will further study it in our future work.\"}", "{\"summary\": \"This paper proposes a novel framework for selecting the most suitable neural solver for different instances of combinatorial optimization problems (COPs). The framework effectively combines graph feature extraction through attention mechanisms and hierarchical encoders, as well as multiple solver selection strategies, including Greedy, Top-k, and Rejection-based approaches. The experimental results demonstrate the superiority of the proposed method over traditional approaches on tasks like TSP and CVRP. Overall, the paper offers valuable insights to instance-specific solver selection.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces a novel combination of hierarchical graph encoders and multiple selection strategies, which together enhance solver performance for different combinatorial optimization problem instances.\\n2. The experimental results cover a range of combinatorial optimization tasks and demonstrate improvements in performance compared to using a single solver.\\n3. The proposed Adaptive Solver Selection Framework for selecting solvers based on instance characteristics is flexible.\", \"weaknesses\": \"1. The current selection strategies include Greedy selection, Top-k, Rejection-based, and Top-p. While these strategies have demonstrated effectiveness in different experimental settings, the basis for choosing the most suitable strategy for different types of instances is not clear. For example, what kind of instances would make Top-k more suitable than Top-p?\\n2. The paper could benefit from including more graphical representations.\", \"questions\": \"1. 
The paper mentions that the hierarchical encoder can better leverage the hierarchical structural features in COPs. However, the intuitive interpretation of these hierarchical features is unclear. How do these structures correspond to specific instance properties of problems such as TSP or CVRP?\\n2. The paper mentions the use of graph features and instance scale as inputs for the selection model, while the specific features of different neural solvers are not directly involved in the learning process. Would the absence of these solver-specific features limit the generalization ability of the selection model?\\n3. During the score calculation phase, how exactly does the MLP relate to different solvers? In other words, how are the features of different solvers reflected in the MLP, and how does this ensure that the classification results are correlated with the solvers' features?\\n4. In the Top-p selection strategy, the paper defines a threshold probability $p$ to decide which solvers to retain. Is this threshold set adaptively based on the problem's features, allowing for optimal performance? \\n5. The introduction of a hierarchical encoder adds complexity to the model. How does this impact the overall training efficiency and inference speed of the model? Is there any quantitative analysis showing the trade-off between the hierarchical encoder's added complexity and the model's performance improvements?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"(Following the block above...)\\n\\nMoreover, we also proposed adaptive selection strategies considering the confidence of the selection model, which goes beyond what traditional methods offer. 
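To make this concrete, here is a minimal, invented sketch of the mechanism: an MLP head whose output index j carries (by convention) the score of solver j, turned into a confidence distribution from which a top-p-style rule adaptively keeps one or several solvers. All weights, embeddings, and thresholds below are made up for illustration and are not taken from our implementation.

```python
import math

def softmax(z):
    # Numerically stable softmax over a list of scores.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Made-up instance embedding (standing in for the graph encoder output) and a
# fixed linear stand-in for the MLP head: row j of W yields the score of
# solver j, so solver identity is carried purely by the output index.
embedding = [0.3, -1.0, 0.7]
W = [[0.5, 0.1, -0.2],
     [-0.3, 0.4, 0.9],
     [0.2, -0.6, 0.1]]
b = [0.0, 0.1, -0.1]

scores = [sum(x * w for x, w in zip(embedding, row)) + bj
          for row, bj in zip(W, b)]
probs = softmax(scores)

# Greedy strategy: run only the highest-scoring solver.
greedy = max(range(len(probs)), key=probs.__getitem__)

def top_p_select(probs, p):
    # Adaptive strategy: keep the smallest set of solvers whose cumulative
    # confidence reaches p, so uncertain instances receive more solvers.
    order = sorted(range(len(probs)), key=lambda j: -probs[j])
    chosen, cum = [], 0.0
    for j in order:
        chosen.append(j)
        cum += probs[j]
        if cum >= p:
            break
    return chosen
```

With a small p the model typically runs a single solver, while low-confidence instances automatically receive more candidates, which is exactly the optimality/efficiency trade-off discussed here.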
Together, our neural encoder, learn-to-rank selection model, and adaptive strategies form a practical framework for neural solver selection, as validated by both the original and newly added experiments.\\n\\nFor details of the newly added experiments, we introduce them as follows. To demonstrate the effectiveness of our proposed techniques, **we provide additional comparisons between our proposed method and existing algorithm selection methods for non-neural TSP solvers [1,2]**. In fact, the method of using features from [1] and our ranking model was also compared in Table 3 of the original paper. The R package *salesperson*[3] provides the up-to-now most comprehensive collection of features for TSP and is widely used in algorithm selection methods [2,4]. Based on the feature set of *salesperson*, we reproduce an advanced algorithm selection method [2] following the pipeline that computes hand-crafted features, conducts feature selection, and applies random forest for classification, where we employ univariate statistical tests to select important features. 
Besides, we also combine the *salesperson* features with our ranking model for ablation.\n\n| Methods \\ Metrics | Optimality gap on synthetic TSP | Time on synthetic TSP | Optimality gap on TSPLIB | Time on TSPLIB |\n| --- | --- | --- | --- | --- |\n| Single best solver | 2.33% | 1.45s | 1.95% | 1.74s |\n| Features from [1] + Ranking | 1.97% (0.01%) | 1.37s (0.01s) | 1.83% (0.03%) | 1.32s (0.05s) |\n| Method of [2] | 2.12% (0.04%) | 1.35s (0.00s) | 1.56% (0.01%) | 1.34s (0.05s) |\n| Features from *salesperson* + Ranking | 1.95% (0.01%) | 1.33s (0.03s) | 1.55% (0.03%) | 1.27s (0.06s) |\n| Ours + Greedy | 1.86% (0.01%) | 1.33s (0.01s) | 1.33% (0.06%) | 1.28s (0.03s) |\n| Ours + Top-p (p=0.5) | **1.68% (0.02%)** | 1.86s (0.07s) | 1.28% (0.04%) | 1.46s (0.06s) |\n| Ours + Rejection (20%) | 1.75% (0.02%) | 1.63s (0.01s) | **1.26% (0.03%)** | 1.51s (0.04s) |\n\nThe experimental results in the above table show that our proposed method achieves superior performance over advanced algorithm selection methods on both synthetic TSP and TSPLIB. Comparing the fifth and sixth rows, our proposed hierarchical encoder demonstrates superior performance over the *salesperson* features, especially on the out-of-distribution benchmark TSPLIB. Additionally, the comparison of the fourth and fifth rows shows that our deep learning-based ranking model achieves better results than traditional classification methods. Furthermore, the results of the last three rows illustrate that our proposed adaptive selection strategies effectively enhance optimality with minimal increases in time consumption.\n\nIn summary, though our goal is to introduce model selection into the area of NCO and show its effectiveness, the proposed implementation of the selection framework for this purpose also has some technical novelty over the existing algorithm selection methods from other areas. We hope our discussion, along with the newly added experiments, addresses your concerns. 
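As a toy illustration of the headroom that instance-wise selection targets, consider the following sketch; the gap matrix below is entirely invented and not taken from our experiments.

```python
# Invented optimality gaps (%) of three solvers on four instances:
# rows are instances, columns are solvers.
gaps = [
    [1.2, 0.4, 2.0],
    [0.8, 1.5, 0.3],
    [2.1, 0.9, 1.7],
    [0.5, 1.1, 0.6],
]
n_instances, n_solvers = len(gaps), len(gaps[0])

# "Single best solver": the one solver that is best on average,
# applied to every instance (a static choice).
single_best = min(sum(row[j] for row in gaps) / n_instances
                  for j in range(n_solvers))

# Oracle: the per-instance best solver, i.e., the bound that
# instance-wise selection tries to approach.
oracle = sum(min(row) for row in gaps) / n_instances
```

Whenever different solvers win on different instances, the oracle average is strictly below every single solver's average, which is precisely why per-instance selection can beat the single best solver.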
Thank you again for your thoughtful feedback.\n\n**Response to \u201cNovelty of the key components\u201d**\n\nThank you for your valuable question. As we emphasized before, our goal is to introduce model selection into the area of NCO and show its effectiveness. Thus, we focus on providing a practical implementation of the selection framework, instead of developing entirely new components. But the proposed implementation of the selection framework also has some technical novelty, as we introduced before. Here, we give a more detailed introduction.\n\n1. **Pooling-Based Hierarchical Graph Encoder**: The proposed hierarchical encoder employs a graph pooling operation to downsample representative subgraphs, constructing hierarchical representations that are robust to distributional shifts, which is not quite common in the NCO community. This encoder outperforms the standard graph encoder and hand-crafted features, especially on out-of-distribution datasets. The corresponding results can be found in Table 3, and we also provide additional results (in the above table) by comparing our encoder with more hand-crafted features [1,2] according to your suggestions. \n2. **Adaptive Selection Strategies**: We propose new adaptive strategies like top-p and rejection-based selection, which allow the model to adaptively choose one or multiple solvers based on its confidence, effectively balancing optimality and efficiency. The corresponding results can be found in Tables 1 and 2. To our knowledge, existing selection methods usually focus on the top-1 or top-k selection and have not explored such adaptive strategies.\n\n(Limited by space, the following contents are in the next block)\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work proposes a neural solver selection framework to efficiently select a subset of suitable neural combinatorial optimization (NCO) solvers to handle each problem instance at inference time. 
Three key components (feature extraction, training loss, and selection strategies) have been proposed and investigated in detail for solver selection. Experimental results show that the proposed framework can achieve promising performance on the traveling salesman problem (TSP) and the capacitated vehicle routing problem (CVRP) with little extra time cost.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper is well written and easy to follow.\", \"Algorithm selection is an important strategy for classic (combinatorial) optimization, and it has not yet been well studied for NCO. This work is a timely contribution to this important research direction.\", \"The proposed algorithm selection framework can achieve promising performance on different TSP and CVRP instances.\"], \"weaknesses\": \"**1. Connection to Neural Combinatorial Optimization (NCO) and Novelty**\\n\\nAlthough this work's main motivation is to propose a neural solver selection framework for NCO, it seems that the proposed solver selection approach is actually agnostic to NCO. It is more like an independent learning-based solver selection method that can be used for other solvers, including the traditional ones. What makes the proposed method specific for NCO?\\n\\nOn the other hand, algorithm selection is already a popular research direction in the optimization community. As correctly mentioned in this paper, many (learning-based) algorithm/solver selection methods have already been proposed and widely used in practice (for example, see [1] for TSP algorithm selection). Many of them can be easily adapted to select NCO solvers. What is the novelty/contribution of the proposed framework over the existing algorithm selection methods?\\n\\n[1] https://tspalgsel.github.io/\\n\\n**2. 
Discussion/Comparison with Existing Algorithm Selection Methods**\\n\\nI think the claim \\\"[the traditional method] has never been explored in the area of neural combinatorial optimization\\\" is far from enough to truly distinguish the proposed method from the traditional algorithm selection method. A detailed discussion/comparison with traditional algorithm selection methods is needed. \\n\\nWhat are the advantages/disadvantages of the proposed method compared with existing (learning-based and non-learning-based) algorithm selection methods? What is the performance of existing algorithm selection methods with NCO solvers? What is the performance of the proposed method with classic solvers?\\n\\n**3. Novelty of the Key Components**\\n\\nThe proposed framework has three key components, namely feature extraction, selection model, and selection strategies. However, it seems that these components and the proposed structures are quite common in the NCO and algorithm selection community. The novelty and unique contribution of these components should be highlighted with solid evidence. \\n\\n**4. Generalization Performance**\\n\\nAlthough experimental results show the proposed framework has good performance with problem distribution/scale shifts, it is unclear why it can achieve good out-of-distribution generalization performance as a learning-based method. \\n\\n**5. Experiments**\\n\\nIn the experiments, only a single summary table is provided for each comparison. 
I think a complete table with separate results for different instances (e.g., with different numbers of nodes), as widely used in other NCO papers, could be very helpful to better understand the performance of the proposed method.\n\nAs mentioned above, a detailed comparison with existing (learning-based and non-learning-based) algorithm selection methods is also needed.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your feedback\", \"comment\": \"Thank you very much for your kind response. We are very pleased to hear that our reply has addressed many of your concerns. Yes, we truly hope this work can open a door for solver selection of NCO, and inspire more follow-up works on this research direction. Thank you once again for your time and valuable insights in reviewing our paper.\"}", "{\"summary\": \"This paper considers a new perspective on solving combinatorial optimization (CO) problems using deep learning. Given an instance, a deep learning framework is trained to select the most suitable solver for this instance from a state-of-the-art solver pool. The general idea is (1) feature extraction of the input instance, (2) selection based on several criteria, e.g., top k, and the output of a trained classifier/ranking model, and (3) running the instance on the selected solver(s).\n\nThe experiment results show that the proposed method can improve on the current single-solver performance with little effort. It also shows the ability to generalize. The key idea behind this paper is similar to that of: Bai Y, Zhao W, Gomes C P. Zero Training Overhead Portfolios for Learning to Solve Combinatorial Problems[J]. arXiv preprint arXiv:2102.03002, 2021. Since CO problems are typically very hard, a single solver cannot capture the entire problem structure. 
Different solvers thus have their own advantages, and we can leverage this to improve performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) Since CO problems are typically very hard, a single solver cannot capture the entire problem structure. Different solvers have their own advantages, which we can leverage to improve performance.\n(2) The experiment results show the ability to generalize.\", \"weaknesses\": \"See questions.\", \"questions\": \"(1) Do you have any results on TSP-10000 or large instances? Trying to train your selection model on TSP-1000 and seeing how it generalizes to TSP-10000 is critical.\n(2) Are the instances training the selection model generated from the same distribution of the testing instances?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We are very grateful to the reviewers for carefully reviewing our paper and providing constructive comments and suggestions. We have revised the paper carefully according to the comments and suggestions. The changed and newly added parts are colored in blue in the new version. Our response to individual reviewers can be found in the personal replies, but we would also like to make a brief summary of revisions for your convenience.\n\n1. According to the suggestions of Reviewers N15x and SaPT, we conduct new experiments on larger-scale instances (N up to 2000) with newly added divide-and-conquer neural solvers, detailed in Appendix A.12. The additional results demonstrate that our proposed method can be compatible with more neural solvers and can also improve performance on larger-scale instances.\n2. 
According to the suggestion of Reviewer PbuG, we conduct comparison experiments with advanced algorithm selection methods for non-neural TSP solvers, showing the superiority of our implementation. Details can be found in Appendix A.13.\n3. According to the suggestion of Reviewer kQip, we provide graphical illustrations of how the hierarchical encoder works in Appendix A.15, where we find some intuitively reasonable down-sampled patterns.\n4. We provide more detailed results of our method.\n\n (1). According to the suggestion of Reviewer PbuG, we present separate results with different problem scales in Appendix A.11, demonstrating that our method consistently outperforms individual neural solvers across different problem scales.\n\n (2). According to the suggestion of Reviewer kQip, we add comparisons of computational costs between our hierarchical encoder and a typical graph encoder in Appendix A.6. These results further validate the efficiency and superiority of our hierarchical encoder.\n\n (3). According to the suggestion of Reviewer SaPT, we now include the results of objective values in Tables 8 and 9 for more detailed comparisons. \n5. We add some discussions according to the reviewers' comments.\n\n (1). As suggested by Reviewer kQip, we have expanded the discussion on the advantages of our proposed selection strategies in Appendix A.7, highlighting their impact on optimality and efficiency;\n\n (2). In Appendix A.14, we have added explanations for our dataset choices as suggested by Reviewer SaPT, clarifying how the dataset aligns with the scope and goals of our work; \n\n (3). In Section 5, we have included a discussion on the versatility of our framework, as also recommended by Reviewer SaPT, emphasizing its potential applicability to different combinatorial optimization problems;\n\n (4). 
Following Reviewer N15X's suggestion, we discuss a new related work in Appendix A.5 to provide a more comprehensive context for our contributions.\\n\\n\\n**We hope that our response has addressed your concerns, but if we missed anything please let us know.**\"}", "{\"comment\": \"(Following the block above...)\\n\\n3. **Neural Solver Feature**: We explore the usage of the neural solver feature in Section 5, and its experimental results can be found in Appendix A.9. Specifically, we propose to use a Transformer to learn a summary feature from representative instances of each neural solver. This enables generalization to unseen solvers, allowing us to add new solvers to the pool without fine-tuning the selection model. To our knowledge, the generalization ability over solvers is also new for existing algorithm/model selection methods. \\n\\nWe believe that many future efforts on selection techniques (e.g., designing better feature representation of neural solvers, exploring runtime-aware selection methods for neural solvers with different search budgets, and enhancing the solver pool by training) can be made under our framework, as discussed in Section 5. We believe our work can benefit the NCO community and inspire future research in this area.\", \"references\": \"[4] Learning the travelling salesperson problem requires rethinking generalization. In Constraints.\\n\\n[5] Towards omni-generalizable neural methods for vehicle routing problems. In ICML.\\n\\n[6] INViT: A generalizable routing problem solver with invariant nested view transformer. In ICML.\\n\\n[7] Towards generalizable neural solvers for vehicle routing problems via ensemble with transferrable local policy. In IJCAI.\\n\\n**Response to: Comparison under different scales of instances**\\n\\nThank you for your thoughtful comments. As you suggested, we have revised to provide separate results on our datasets for a deeper investigation. 
The following tables demonstrate that our selection method consistently outperforms the single best solver across different problem scales on both TSP and CVRP datasets. We hope this addresses your concerns.\\n\\nSeparate results according to problem scale $N$ on the synthetic TSP dataset. We report the mean (standard deviation) optimality gap over five independent runs. \\n\\n| Methods | $50\\\\le N \\\\le 200$ | $200< N \\\\le 300$ | $300< N \\\\le 400$ | $400< N \\\\le 500$ |\\n| --- | --- | --- | --- | --- |\\n| Single best solver | 0.96% | 2.34% | 2.78% | 2.98% |\\n| Oracle | 0.39% | 1.19% | 1.70% | 2.18% |\\n| Ours (Greedy) | 0.84% (0.03%) | 2.01% (0.02%) | 2.43% (0.02%) | 2.71% (0.03%) |\\n| Ours (Top-k, k=2) | 0.61% (0.02%) | 1.53% (0.03%) | 1.99% (0.03%) | 2.41% (0.05%) |\\n| Ours (Rejection, 20%) | 0.75% (0.04%) | 1.86% (0.04%) | 2.33% (0.03%) | 2.62% (0.02%) |\\n| Ours (Top-p, p=0.5) | 0.71% (0.02%) | 1.70% (0.02%) | 2.24% (0.04%) | 2.57% (0.04%) |\\n\\nSeparate results according to problem scale $N$ on the synthetic CVRP dataset. \\n\\n| Methods | $50\\\\le N \\\\le 200$ | $200< N \\\\le 300$ | $300< N \\\\le 400$ | $400< N \\\\le 500$ |\\n| --- | --- | --- | --- | --- |\\n| Single best solver | 3.95% | 6.06% | 7.76% | 9.24% |\\n| Oracle | 2.17% | 4.33% | 5.74% | 7.40% |\\n| Ours (Greedy) | 2.85% (0.03%) | 4.87% (0.02%) | 6.47% (0.05%) | 8.09% (0.01%) |\\n| Ours (Top-k, k=2) | 2.32% (0.02%) | 4.54% (0.02%) | 5.91% (0.03%) | 7.55% (0.03%) |\\n| Ours (Rejection, 20%) | 2.64% (0.02%) | 4.70% (0.03%) | 6.22% (0.02%) | 7.91% (0.03%) |\\n| Ours (Top-p, p=0.8) | 2.36% (0.02%) | 4.70% (0.04%) | 6.21% (0.05%) | 7.81% (0.02%) |\"}", "{\"comment\": \"**Response to Question 4: The threshold hyperparameter in Top-p selection strategy**\\n\\nThank you for your insightful question. Currently, the threshold parameter $p$ is not adaptively set for each instance but fixed (e.g., 0.5 on TSP) for all the instances. 
In Figure 4 of our paper, we plotted the results of using $p$ values ranging from 0.95 to 0.40 in decrements of 0.01. These results demonstrate a trade-off: As $p$ increases, the optimal gap improves, but the average time required also increases. This highlights $p$ as a hyperparameter that allows users to balance efficiency and optimality. Though using a fixed value has led to good performance in our experiments, we believe that adaptively adjusting $p$ for each instance could further improve performance as you suggested. We have revised to add this as an interesting direction for future research.\\n\\n**Response to Question 5: Efficiency of the proposed hierarchical encoder**\\n\\nThanks for your question. The introduction of our hierarchical encoder brings very limited computation costs. To address your concerns, we provide detailed comparisons of the computation cost and optimality, between our hierarchical encoder and a typical graph encoder. The results are shown in the following table, which includes the inference time per instance on TSPLIB, training time per epoch, and the average optimality gap on TSPLIB.\\n\\n| Methods | Inference time on TSPLIB of selection model | Inference time on TSPLIB of neural solvers | Training time each epoch | Optimality gap on TSPLIB |\\n| --- | --- | --- | --- | --- |\\n| Naive graph encoder | 0.0054s | 1.2600s | 1m40s | 1.54% |\\n| Hierarchical graph encoder | 0.0070s | 1.2961s | 2m30s | 1.37% |\\n\\nWe can observe from the second column that the introduction of our hierarchical encoder will increase the inference time of the selection model a little bit, e.g., from 0.0054s to 0.0070s. However, as shown in the second and third columns, the inference time of the selection model is orders of magnitude shorter than that of the neural solvers, so the inference efficiency of the selection model is less of a concern. 
The fourth column shows that the training time per epoch of the na\\u00efve encoder and the hierarchical encoder are 1m40s and 2m30s, respectively. Although the hierarchical encoder slows the training, the total runtime for 50 epochs is still only 2 hours, which is acceptable in most scenarios. Therefore, the performance metric (i.e., optimality gap) of different encoders is more crucial, especially the generalization performance. If the encoder learns robust representations, we can directly transfer the selection model to different datasets in a zero-shot manner, saving the time for fine-tuning and adaptation. Considering the better generalization (e.g., the optimality gap decreases from 1.54% to 1.37%), we believe that the proposed hierarchical encoder is a better choice. \\n\\nThanks to your suggestion, we have revised to add the results (i.e., Table 6) in the new version. We hope this explanation can address your concerns. Thank you again.\"}", "{\"summary\": \"This paper introduces a framework for selecting the most appropriate neural solvers for TSP and CVRP at instances level. The framework enhances performance by allocating each problem instance to the most suitable solvers from a pool of available neural solvers via graph encoding and tailored selection model and strategy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is generally well-structured and easy to follow. The writing is clear and the presentation of the idea is concise. The idea of selecting neural solvers at instance level is interesting and of practical significance.\", \"weaknesses\": \"1. The novelty of this work is somewhat limited because the idea of ranking different existing solvers for individual instances is not technically innovative, and the framework seems to highly rely on previous solvers, graph encoders, as well as established losses and selection strategies.\\n2. 
Obtaining the supervision for training requires executing multiple solvers on the same training set, which is probably time-consuming. Furthermore, given such computational overhead, one could arguably just run the solvers sequentially on the target dataset and directly select the optimal result. Thus, further clarification is needed on the necessity of this proposal.\\n3. The OPT in the evaluation is somewhat misleading. I suggest the authors solve the test instances with exact solvers or powerful heuristics like Gurobi, LKH3, HGS, etc., as the reference solutions for the computation of optimality gaps, which also better aligns with previous works. \\n4. Additionally, adding such heuristics (in point 3) to your selection zoo is worth considering for further experimental results. If the neural solvers achieve performance comparable to the learning-free methods, the significance of this work is further strengthened.\\n5. More mainstream solvers should be included, such as [1-7]. They are a representative (but not exhaustive) set of neural works for routing problem solving, including supervised-, reinforced-, unsupervised-, meta-reinforced-, divide-and-conquer-, and neural-heuristic-mannered approaches. It is acceptable for the authors to include a subset of them in the framework, but this would benefit the completeness of your empirical evaluation.\\n6. The authors are also suggested to evaluate their framework on the conventionally used uniform TSP dataset (like the consistent test files used throughout [1,2,4,5,7, etc.]). And please report the original objective for the COPs in addition to only the currently reported gap.\\n7. The claim in the title is broader than what is done within the paper. If the framework is to be a neural solver selection method for combinatorial optimization, can it be readily applied to more complex problems beyond TSP and CVRP? 
And what is the solution at larger-scaled (e.g., $N\\\\ge 1000$) instances where most neural solvers struggle to produce satisfactory results compared to the traditional heuristics?\\n\\n**References:**\\n\\n[1] DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems.\\n\\n[2] An Efficient Graph Convolutional Network Technique for the Travelling Salesman Problem.\\n\\n[3] Unsupervised Learning for Solving the Travelling Salesman Problem.\\n\\n[4] Graph Neural Network Guided Local Search for the Travelling Salesperson Problem.\\n\\n[5] Generalize a Small Pre-trained Model to Arbitrarily Large TSP Instances.\\n\\n[6] GLOP: Learning Global Partition and Local Construction for Solving Large-scale Routing Problems in Real-time.\\n\\n[7] Attention, Learn to Solve Routing Problems!\", \"questions\": \"Please see the weaknesses part for questions and suggestions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Weakness5 & 7-2 New neural solver & larger-scaled experiments**\\n\\nThank you for your constructive suggestions. We have added several new solvers to our pool as recommended. Then, we increased the problem scale from N \\u2208 [0,500] to N \\u2208 [500, 2000] and used the enhanced solver pool to conduct new experiments. The results, shown in the following table, demonstrate that our framework is compatible with more neural solvers and can also improve performance over the single best solver on larger-scale instances. We hope these additional results can address your concerns. \\n\\nFor details, we add two divide-and-conquer solvers, GLOP and UDC [8], to our solver pool, which can significantly enhance the overall performance. 
We do not include other neural solvers since they either contribute little to the overall performance [1,2,7], which would be filtered out by our elimination process detailed in Appendix A.3, or rely on post-search techniques (e.g., Monte-Carlo tree search) [3, 5] that consume much more time than other greedy decoding methods in the solver pool, causing some fairness issues. The construction of our solver pool now considers reinforced (ELG, INViT), supervised (BQ, LEHD), meta-learning-based (Omni), diffusion-based (DIFUSCO, T2T), and divide-and-conquer (GLOP, UDC) methods. The experimental results have shown that our proposed framework can effectively combine the advantages of these neural solvers and significantly improve performance. We have revised to include these new results (i.e., Table 12) in the new version.\\n\\n| Methods \\\\ Metrics | Optimality gap on TSP500-2000 | Time on TSP500-2000 |\\n| --- | --- | --- |\\n| Single best solver | 6.104% | 8.369s |\\n| Ours (Greedy) | 5.540% (0.038%) | 8.322s (0.036s) |\\n| Ours (Top-k, k=2) | **5.369% (0.003%)** | 15.566s (0.085s) |\\n| Single best of new solver pool | 3.562% | 5.274s |\\n| Ours with new solvers (Greedy) | 3.126% (0.002%) | 6.892s (0.006s) |\\n| Ours with new solvers (Top-k, k=2) | **2.955% (0.005%)** | 13.713s (0.036s) |\\n\\n**Weakness6: Evaluation on uniform TSP dataset**\\n\\nThanks for your thoughtful suggestions. Yes, uniform datasets are commonly used, and many neural solvers have already achieved excellent performance (Gap < 0.5%) on the uniform TSP100. Our study focuses on a harder setting by using a dataset with diverse instances of varying distributions and scales, which allows us to assess whether a selection method can effectively identify the suitable solver for a wide range of instances. Thanks to your suggestion, we also provide the results of coordinating multiple neural solvers on the uniform TSP100, as shown in the following table. 
These results show that while selection on this dataset can still be effective, the potential improvement over the best single solver is limited, as single solvers already perform well on the uniform dataset. \\n\\n| Methods | Our synthetic dataset | Uniform TSP100 |\\n| --- | --- | --- |\\n| Single best solver | 2.33% | 0.29% |\\n| Oracle of multiple solvers | 1.24% | 0.10% |\\n\\nWe have revised to add these results (i.e., Table 14) in the new version. Regarding your second suggestion, we have also revised to report the original objective values (i.e., Tables 8 and 9). \\n\\n**Weakness7-1 Versatility of our framework**\\n\\nThanks for your valuable question. In this paper, we implemented the method on TSP and CVRP since these two representative problems are widely studied in the NCO community and have many diverse neural solvers for selection. Besides TSP and CVRP, our proposed selection framework is adaptable to other problems. For new problems, one only needs to customize the feature extraction component. For instance, when adapting our framework to scheduling problems, one can adjust the graph attention encoder according to MatNet [9] (i.e., add edge embeddings). Other components, like training loss and selection strategies, do not require changes for new problems. We have revised to add some discussion in Section 5. Thank you very much.\", \"references\": \"[1-7] correspond to the references you provided\\n\\n[8] UDC: A unified neural divide-and-conquer framework for large-scale combinatorial optimization problems. In Advances in Neural Information Processing Systems 37 (NeurIPS).\\n\\n[9] Matrix encoding networks for neural combinatorial optimization. In Advances in Neural Information Processing Systems 34 (NeurIPS).\"}", "{\"comment\": \"I appreciate the authors' response, which addressed many of my concerns. 
However, I still have the following reservations: Although the paper presents a general framework for neural solver selection, it lacks a breakthrough in method design compared to existing work. Many of the proposed selection strategies, such as Top-k, rejection-based, and Top-p selection, are direct applications of existing ensemble learning methods without introducing mechanisms that could substantially enhance selection efficiency or effectiveness. While these strategies do provide some improvement, I remain concerned that the contributions may not be sufficiently innovative to meet the standards of this conference. However, after considering all factors, I am willing to raise my score to 6.\"}", "{\"comment\": \"Thank you very much for your thorough response and new experimental results. I have also read other reviewers' comments and the corresponding responses. Since many of my concerns have been properly addressed, I raise my score to 6.\\n\\nThe major remaining concern is still on its connection to NCO. The main contribution of this work is more like 1) a new learning-based solver selection approach that can be used for different (neural or traditional) solvers and 2) a case study of using the proposed method for NCO, rather than a novel approach that can truly leverage the specific patterns/characteristic of NCO models for neural solver selection. The potential research directions briefly discussed in the conclusion section (feature extraction for neural solvers, runtime-aware selection, and actively training complementary solvers) are all important and could be very helpful in achieving this goal, but none of them have been done in this work.\\n\\nOn the other hand, I agree with the authors that solver selection is important for NCO, and this work might inspire more follow-up works on this research direction. Therefore, I vote to weakly accept this work (6).\"}", "{\"comment\": \"Thank you for reviewing our paper. 
We sincerely appreciate your valuable suggestions for refining our work. According to your comments, we have clarified our contributions and enriched our paper as you advised. Here are the detailed responses.\\n\\n**Response to Weakness1: The novelty and position of this paper**\\n\\nThanks for your valuable comments. There may be some misunderstanding regarding the intended position and focus of this paper, and we are grateful for the chance to elaborate. The primary contribution of our work lies in pioneering the integration of model selection into Neural Combinatorial Optimization (NCO) and demonstrating its effectiveness through extensive experiments.\\n\\nUnlike traditional methods, NCO methods leverage neural networks to build data-driven solvers, obtaining good optimality gaps with significantly superior inference efficiency. However, inspired by the No-Free-Lunch theorem, we investigated the instance-level performance of prevailing NCO solvers and found that they demonstrate clear complementarity. This phenomenon emphasizes the potential of combining the advantages of state-of-the-art neural solvers and motivates our proposal of adaptively selecting suitable solvers for each instance. Since our work is intended as a pioneering attempt at neural solver selection, our main goal is to verify the possibility and benefits of solver selection for NCO. In our experiments, we found that even a straightforward method using hand-crafted features and classification models can outperform the state-of-the-art neural solver, which strongly indicates that solver selection is a promising direction for NCO. We believe our work can benefit the NCO community and inspire future research in this area. For example, we have discussed several future directions (e.g., designing better feature representations of neural solvers, exploring runtime-aware selection methods for neural solvers with different search budgets, and enhancing the solver pool by training) in Section 5. 
\\n\\nWe hope the above clarification has made the main contribution of this work clear. In fact, for the current method, there are also some meaningful technical advancements. For example, \\n\\n1. **The pooling-based hierarchical encoder.** We propose to design a graph pooling method to downsample representative nodes from the complete instance, which constructs hierarchical representations that are empirically proven robust when the problem distribution/scale shifts. To the best of our knowledge, this approach is new in NCO. \\n2. **The selection strategies.** Existing selection methods for optimization algorithms or machine learning models only focus on top-1 or top-k selection, while we propose two new selection strategies (i.e., rejection-based and top-p selection) by considering the confidence of the selection model. These strategies can adaptively select additional neural solvers for low-confidence instances, enhancing the robustness with minimal time consumption.\\n\\nThank you again for your thoughtful comments. We sincerely hope our response clarifies our contributions and the potential impact of our work. \\n\\n**Weakness2: The necessity of neural solver selection**\\n\\nThanks for your comment. As you noted, sequentially performing solvers on a target dataset can be effective. However, when new instances come, this approach requires rerunning all solvers on each instance. In contrast, a well-trained selection model can generalize to unseen instances in a zero-shot manner, efficiently selecting the most suitable solver for each instance. Thus, the selection model only needs to run the selected solver, and can be much more efficient than simple sequential execution (requiring running all solvers). We hope this addresses your concerns.\\n\\n**Weakness3: The ambiguity of \\u201cOPT\\u201d**\\n\\nWe are very sorry for the confusion. The results reported in our paper are just the optimality gap with respect to the output of HGS on CVRP and LKH3 on TSP. 
We used \\\"OPT\\\" to represent the performance of the best individual solver on each instance. To avoid misunderstanding, we have replaced \\\"OPT\\\" with \\\"Oracle\\\" in the revised version.\\n\\n**Weakness4: Discussion on the combination with traditional solvers**\\n\\nThank you for your constructive suggestion. We agree that combining powerful traditional solvers and neural solvers for enhanced performance is of significance to both communities. However, a key motivation of neural combinatorial optimization is that neural networks have an overwhelming speed advantage over heuristc methods, which can serve as alternatives to traditional solvers in scenarios that require time efficiency. In this paper, we follow this motivation and aim to improve the performance of neural solvers without sacrificing their efficiency. Thus, we did not involve traditional heuristc solvers in this work.\"}" ] }
CFKZKjrQ5r
FCoReBench: Can Large Language Models Solve Challenging First-Order Combinatorial Reasoning Problems?
[ "Chinmay Mittal", "Krishna Kartik", "Parag Singla", "Mausam ." ]
Can the large language models (LLMs) solve challenging first-order combinatorial reasoning problems such as graph coloring, knapsack, and cryptarithmetic? By first-order, we mean these problems can be instantiated with potentially an infinite number of problem instances of varying sizes. They are also challenging being NP-hard and requiring several reasoning steps to reach a solution. While existing work has focused on coming up with datasets with hard benchmarks, there is limited work which exploits the first-order nature of the problem structure. To address this challenge, we present FCoReBench, a dataset of 40 such challenging problems, along with scripts to generate problem instances of varying sizes and automatically verify and generate their solutions. We first observe that LLMs, even when aided by symbolic solvers, perform rather poorly on our dataset, being unable to leverage the underlying structure of these problems. We specifically observe a drop in performance with increasing problem size. In response, we propose a new approach, SymPro-LM, which combines LLMs with both symbolic solvers and program interpreters, along with feedback from a few solved examples, to achieve huge performance gains. Our proposed approach is robust to changes in the problem size, and has the unique characteristic of not requiring any LLM call during inference time, unlike earlier approaches. As an additional experiment, we also demonstrate SymPro-LM’s effectiveness on other logical reasoning benchmarks.
[ "llms", "logical-reasoning", "first-order-reasoning", "neuro-symbolic" ]
Reject
https://openreview.net/pdf?id=CFKZKjrQ5r
https://openreview.net/forum?id=CFKZKjrQ5r
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qYupyjHyg2", "XyTpmqL5hE", "T9sKJ71W2M", "NuEKmUJR8i", "Lv0QQEQxcr", "GmM8D55hI0" ], "note_type": [ "official_review", "decision", "official_review", "official_review", "official_review", "meta_review" ], "note_created": [ 1730698509170, 1737524299896, 1730053214020, 1730164252919, 1730580646533, 1734839537876 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14114/Reviewer_yJYh" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14114/Reviewer_3HGQ" ], [ "ICLR.cc/2025/Conference/Submission14114/Reviewer_8qGB" ], [ "ICLR.cc/2025/Conference/Submission14114/Reviewer_fByT" ], [ "ICLR.cc/2025/Conference/Submission14114/Area_Chair_WMQX" ] ], "structured_content_str": [ "{\"summary\": \"This paper focuses on the problem-solving ability of LLMs on first-order combinatorial problems in natural language form, arguing that no existing benchmark could reveal this challenge properly. To stress the significance of this issue, this paper proposes a new benchmark, FCoReBench, which covers 40 challenging problems in varying sizes and corresponding solutions. In response to the poor performance of current LLMs on FCoReBench, this paper further proposes a new framework, SymPro-LM, to push forward the potential capacity of language models by combining symbolic solvers, program interpreters and the LM backbone. The experimental results show a significant improvement in various aspects, indicating the value of assembling different augmented modules.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The problems covered in FCoReBench are relatively comprehensive, highlighting a valuable research direction. 
It would be interesting to see more generalized problems being addressed once VLMs are taken into consideration.\", \"A corresponding response framework has been developed for the issue proposed, and the experimental results are promising.\", \"The experimental section in section 7 features thorough verification and comprehensive chart presentations.\", \"The discussion in section 8 is insightful. It would be beneficial to list the problems in each situation in the appendix, and even better, to illustrate them with diagrams in the main text. This would help to elucidate the dataset's relevance to the central issue.\"], \"weaknesses\": [\"The dataset construction in Section 4 requires manual labor, which is quite labor-intensive. Could it be automated using an LLM?\", \"The current agent can only solve first-order logic. Higher-order logic requires individual generation, which is resource-intensive and difficult to scale.\", \"There is a lack of innovation in the proposed framework SymPro-LM, which merely combines existing symbolic solvers and program generation. It would be better to consider a more specific design.\"], \"writing_aspects\": [\"There are issues with the section layout and organization; the section titles are inconsistent and not uniformly formatted (e.g. section 5 and 5.1, section 7 and 7.1). The table layout on page 7 is also peculiar.\", \"The overall language used in writing is subpar, being rather colloquial and informal. E.g.:\", \"In Section 3, as a problem definition, there should not be such an emphasis on the subject \\\"We.\\\" The problem should be described objectively and rigorously from a third-party perspective.\", \"In Section 4, the term \\\"the author\\\" should be used less frequently to avoid potential privacy issues. Instead, use \\\"agent\\\" or \\\"process\\\" to emphasize actions rather than the actors, which would be more formal. 
If necessary, flowcharts can also be used to represent the selection, polishing, and construction processes, which would greatly assist readers in understanding the overall procedures.\", \"This paper primarily focuses on the benchmark, as emphasized in the title; thus, the experimental section should mainly focus on verifying the performance of the benchmark in various aspects. The current writing approach is centered around SymPro-LM. If this focus is to be maintained, the emphasis of the entire article should be placed on SymPro-LM.\"], \"questions\": [\"I do not understand the sentence in page 3, line 122: \\\"These dataset are not first-order i.e. each problem is accompanied with a single instance (despite the rules potentially being described in first-order logic).\\\"\", \"In page 3 line 151, is the training data $\\\\mathcal{D}_\\\\mathcal{P}$ conditioned on the previous $\\\\mathcal{C}$ given by different problems? In my understanding, different $(x, y)$ pairs may have different $\\\\mathcal{C}$.\", \"In page 4 lines 179-181: \\\"The rules were re-written to ensure that an LLM cannot easily invoke its prior knowledge about the same problem. For the same reason, the name of the problem was hidden.\\\" Why not let the LLM be aware of the given problem category?\", \"In page 4 line 191: \\\"did not contain any formal specifications or mathematical formulas\\\" I don't know why this rule is set.\", \"In the last paragraph of section 4, why are the instances in the test set typically larger in size than those in training? Also, why is the number of instances in the test dataset smaller than in the train dataset?\", \"In section 5.1, what is the \\\"training\\\" objective of SymPro-LM? More specifically, this framework does not need any NN training. What does it optimize during iterations?\", \"In page 6 line 318, why do you use different temperatures for different solvers? 
Could you explain the detailed reason?\", \"In page 7 line 349, what is the \\\"random\\\" approach here?\", \"Some typos:\", \"In page 3 line 149: \\\"We are also provided\\\" -> \\\"We also provides\\\"\", \"In page 4 line 213: \\\"which takes as input\\\" -> \\\"which takes input as\\\"\", \"In page 5 line 266: \\\"This step is need\\\" -> \\\"This step is needed\\\"\", \"In page 10 line 504: \\\"but after\\\" -> \\\"only after\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces a problem set designed to assess LLMs' ability to solve first-order combinatorial reasoning problems. It argues that current symbolic-solver-aided LLMs perform poorly on this problem set and proposes a novel approach that combines a symbolic solver with a program interpreter to improve reasoning capabilities, demonstrating superior performance on the problems.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper aims to address an important problem. The proposed approach is conceptually sound, and the experimental results indicate promising improvements in the reasoning capabilities of LLMs when using the technique.\", \"weaknesses\": \"This paper has several critical issues that require the authors' attention:\\n\\n1. Misalignment Between Title and Content: While the title suggests a focus on the proposed problem set, the main body primarily discusses the technique, SymPro-LM. After reviewing the entire paper, it appears more as a technique paper rather than a benchmark paper. I suggest revising the title and reorganizing the structure to more accurately reflect its focus on methodology.\\n\\n2. 
Lack of Clarity on Incremental Contributions of the Problem Set: Although the problem set seems useful, the paper does not clearly articulate its unique contribution. Existing symbolic-solver-aided LLM approaches have already addressed similar reasoning problems, and some may have been tested on benchmarks containing first-order combinatorial reasoning problems. It is essential to compare the proposed problem set with these existing benchmarks, highlighting overlaps and differences. However, this paper provides limited detail on this aspect.\\n\\n3. Scope Restriction and Generalizability of the Technique: While the paper narrows its focus to first-order combinatorial reasoning problems, conceptually, the proposed technique has broader applicability across various reasoning tasks. Given the absence of any domain-specific adaptations, I recommend either expanding the paper\\u2019s scope and conducting a more comprehensive evaluation across diverse reasoning problems, or explaining the reason for the scope restriction.\\n\\n4. Use of an Outdated LLM in Evaluation: The LLM used in the evaluations appears a bit outdated. I suggest incorporating recent models, such as GPT-4o and o1, to provide a more relevant assessment.\\n\\n5. Unclear Criteria for Problem Selection in the Problem Set: The criteria for including specific problems in the problem set are not well-defined. For example, while the paper includes problems from the industry track of SAT competitions, it does not explain the exclusion of others (e.g., the main track). 
Furthermore, recent SAT competitions no longer feature an industry track, making the rationale for this selection unclear.\", \"questions\": \"What are the criteria for problem selection?\\nWhat is the overlap and difference between your problem set and existing problem sets?\\nWhy does the paper restrict its scope to first-order combinatorial reasoning problems?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces FCoReBench, a benchmark designed to evaluate the capabilities of LLMs in solving first-order combinatorial reasoning problems. The benchmarks include NP-hard problem instances like graph coloring and knapsack, with varying instance sizes. Current LLMs struggle with these tasks, particularly as the problem size increases. To address this limitation, this paper proposes SymPro-LM, a hybrid approach that combines LLMs with symbolic solvers, enhancing performance by leveraging the strengths of both methods.\\n\\nThe proposed approach achieved a 21.61% improvement over few-shot prompting, a 3.52% improvement over Program-aided Language models (PAL), and a 16.83% enhancement over Logic-LM. Additionally, incorporating feedback from solved examples boosts SymPro-LM's performance by 21.02% after four rounds, compared to 12.5% for PAL. SymPro-LM also excels on three non-first-order logical reasoning benchmarks, outperforming existing baselines on two datasets and remaining competitive on the third, highlighting the effectiveness of integrating LLMs with symbolic solvers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Using LLMs to solve logic puzzles and combinatorial problems is a very important and interesting direction. This paper contributes a well-established dataset for this field, which can be valuable to the research community. 
The paper also proposes a framework that combines extant solvers such as Z3 with LLMs. The experiment results seem convincing and promising.\", \"weaknesses\": \"1. The name \\\"first order\\\" is a bit confusing. Does it mean it is related to first-order logic? If so, it would be great to elaborate on this connection. Otherwise, a more detailed definition should be provided. It is not clear from the paper what the difference is between first-order problems and second-order ones.\\n\\n2. Whether the level of contribution of this paper meets the standard of ICLR is questionable. It is not clear whether this paper proposed novel methodologies. The main contribution according to the paper seems to be the establishment of a dataset.\", \"minor\": \"The fonts in Figure 4 should be larger.\", \"questions\": \"What is the difference between first-order and non-first-order problems?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Introduces FCoReBench which consists of generators and evaluators for 40 combinatorial optimization problems such as sudoku, graph coloring etc. Evaluates existing prompting approaches and LLM augmentation approaches on the dataset. Proposes a new framework SymPro-LLM which when given a problem, output a program that converts the problem to symbolic representation, which is then passed to a symbolic solver to get the solution.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The proposed SymPro-LLM can work with different instances from the same first order combinatorial optimization problem without the need to re-evaluate using LLMs.\\n\\nThe proposed dataset is difficult for existing LLMs. 
The instances are based on combinatorial reasoning problems, which are mostly NP-Hard problems.\\n\\nThe proposed dataset is lifted such that unlimited new instances can be generated.\", \"weaknesses\": \"While I find the proposed approach of using an LLM to output a program that formulates models interesting, I am not convinced the experiments conducted provide enough insight into LLM reasoning abilities. From the examples shown in figure 2, NL(C), NL(X), NL(Y) seem to be pseudo-code for formulating the problem. The task of the LLM therefore becomes translating the pseudo-code to Python, which does not require the same level of reasoning as solving the problems.\\n\\nThe paper does not evaluate enough existing models for the new proposed benchmark dataset. For example, the state-of-the-art GPT-4o and GPT-o1 are not evaluated. The paper also includes limited analysis of why the existing approaches fail on the proposed dataset. \\n\\nThe writing and presentation require more clarity and focus. For example, section 7 presents results across different LLM models, different frameworks/styles of prompting, different datasets/problem classes, and different experimental setups. It is unclear to me what the key takeaways from these results are.\", \"questions\": \"Is there a possibility of data contamination, where the LLMs have seen these combinatorial optimization problems in their training data, and therefore know how to formulate them easily?\\n\\nDo you have further insights on why Logic-LM performance is so much worse in Table 1? It is also formulating the model and offloading the reasoning to a solver similar to the proposed framework.\", \"table_1\": \"Considering the NP-Hard nature of the problems, how does random guessing achieve over 20% accuracy? 
What are your expectations for the eventual performance of LLMs on this dataset?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces a new benchmark, FCoReBench, consisting of 40 combinatorial optimization problems whose constraints, inputs, outputs, and examples are all stated in natural languages. Additionally, this paper also proposes a new framework, SymPro-LM, which outperforms existing prompting methods like few-shot prompting and program-aided prompting. Introducing new datasets and prompting frameworks to improve the reasoning capability of LLMs are valuable contributions. However, several important concerns are not addressed properly. For instance, to what extent the 40 combinatorial problems are new compared to existing reasoning tasks? Although the problem is stated in natural language, the form is still rigid -- clear separations regarding constraints, I/O instructions, and examples have to be specified, making it not far from a piece of pseudo-code. Furthermore, the key motivation is not very clear; on one hand, it suggests the contribution of benchmark, the novelty of which is a bit questionable; on the other hand, the authors want to show the new framework, SymPro-LM, significantly outperforms existing techniques on a newly crafted benchmark. A more systematic comparison of existing benchmarks where baseline approaches were evaluated would be expected.\", \"additional_comments_on_reviewer_discussion\": \"As the authors did not respond, there was unfortunately no discussion.\"}" ] }
CEvGuwMum0
JudgeRail: Harnessing Open-Source LLMs for Fast Harmful Text Detection with Judicial Prompting and Logit Rectification
[ "Zhongjie Ba", "Hongye Fu", "Yiqi Yang", "Hui Chen", "Qinglong Wang", "Peng Cheng", "Zhan Qin", "Kui Ren" ]
Large language models (LLMs) simultaneously facilitate the generation and detection of harmful text. Leading LLM developers, such as OpenAI, Meta, and Google, are driving a paradigm shift in the detection of harmful text, moving from conventional detectors to fine-tuned LLMs. However, these newly released models, which require substantial computational and data resources, have not yet been thoroughly investigated for their effectiveness in this new paradigm. In this work, we propose JudgeRail, a novel and generic framework that guides open-source LLMs to adhere to judicial principles during text moderation. Additionally, we introduce a new logit rectification method that accurately interprets an LLM's classification intent, rigorously controls its output format, and significantly accelerates detection. By integrating several top-performing open-source LLMs into JudgeRail without any fine-tuning and evaluating them against OpenAI Moderation API, LlamaGuard3, ShieldGemma, and other conventional moderation solutions across various datasets, including those specifically designed for jailbreaking LLMs, we demonstrate that JudgeRail can adapt these LLMs to be competitive with fine-tuned moderation models and significantly outperform conventional solutions. Moreover, we evaluate all models for detection latency, a critical yet rarely examined practical aspect, and show that LLMs with JudgeRail require only 46% to 55% of the time needed by LlamaGuard3 and ShieldGemma. The generic nature and competitive performance of JudgeRail highlight its potential for promoting the practicality of LLM-based harmful text detectors.
[ "Large Language Model", "Harmful Text Detection", "Toxic Speech Detection", "Content Moderation" ]
Reject
https://openreview.net/pdf?id=CEvGuwMum0
https://openreview.net/forum?id=CEvGuwMum0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uHWvegIAa2", "rYEwxt0vm1", "qyfdTqDMeM", "kENX86gzRd", "d5S1vsuJJR", "b0xsc08lYL", "aKasy4Ty8e", "ZkWjsQQ5Q1", "ZWsFvzyyK1", "YWxUmUsJH6", "RtfPaTkp2U", "PGtP9PdZzf", "N2gjqHy7tm", "KgQSnRAxxR", "JdE7razfQX", "FFXLiMxVJ4", "9YipyB6GFB", "5g9utjdpGX", "0e7FqVMlKm" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1732545653580, 1730720529502, 1732360225395, 1732125980196, 1732126433259, 1730798185250, 1732126044962, 1734564036173, 1732546095559, 1732126156400, 1732125782088, 1732479454115, 1732126295675, 1730641408789, 1730944125466, 1732530836922, 1732547440287, 1737523715154, 1732126369138 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5595/Authors" ], [ "ICLR.cc/2025/Conference/Submission5595/Reviewer_yCFh" ], [ "ICLR.cc/2025/Conference/Submission5595/Authors" ], [ "ICLR.cc/2025/Conference/Submission5595/Authors" ], [ "ICLR.cc/2025/Conference/Submission5595/Authors" ], [ "ICLR.cc/2025/Conference/Submission5595/Reviewer_4vQg" ], [ "ICLR.cc/2025/Conference/Submission5595/Authors" ], [ "ICLR.cc/2025/Conference/Submission5595/Area_Chair_Z5cT" ], [ "ICLR.cc/2025/Conference/Submission5595/Reviewer_yCFh" ], [ "ICLR.cc/2025/Conference/Submission5595/Authors" ], [ "ICLR.cc/2025/Conference/Submission5595/Authors" ], [ "ICLR.cc/2025/Conference/Submission5595/Reviewer_MwTd" ], [ "ICLR.cc/2025/Conference/Submission5595/Authors" ], [ "ICLR.cc/2025/Conference/Submission5595/Reviewer_p6GN" ], [ "ICLR.cc/2025/Conference/Submission5595/Reviewer_MwTd" ], [ "ICLR.cc/2025/Conference/Submission5595/Reviewer_4vQg" ], [ "ICLR.cc/2025/Conference/Submission5595/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" 
], [ "ICLR.cc/2025/Conference/Submission5595/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you for your feedback\", \"comment\": \"We appreciate your consideration of the comparisons with existing prompt-based approaches and the concerns raised by Reviewer p6GN. We would like to emphasize several key points:\\n\\n1. While prompt engineering is a common practice, our work demonstrates that simple prompting alone has limited effectiveness in enhancing text moderation, as evidenced by the results presented in our response to Reviewer p6GN. This finding underscores the necessity for more sophisticated designs. In particular, we have conducted extensive experiments with the label system, which not only highlight its impact on moderation performance, an area rarely explored in existing literature, but also demonstrate the flexibility and efficiency (in terms of fine-tuning cost) for adapting our prompt framework to accommodate new moderation categories or requirements.\\n\\n2. Our approach stands out due to its ability to achieve high performance with a single, streamlined prompt structure, which is closely integrated with our logit rectification mechanism. It enables a single round of detection with desirable performance. In contrast, other methods often require multiple LLMs or iterative rounds of reflection to achieve comparable performance, leading to considerable delays in processing individual text samples. Efficiency is crucial in real-world applications where latency and computational resources are critical factors.\\n\\n3. Regarding low-latency, our logit rectification mechanism can operate in tandem with tools such as vLLM, rather than being exclusive. Moreover, we believe it is a generic method that can be applied to other classification-oriented tasks, achieving both output format control and acceleration simultaneously. 
Therefore, we maintain that this aspect represents a core advantage of our method.\\n\\nWe believe these aspects collectively highlight the novelty and practical value of our approach. We appreciate your understanding and hope these clarifications provide additional context for your evaluation.\"}", "{\"summary\": \"This paper presents JudgeRail, a framework designed to adapt LLMs for detecting harmful content. The authors conduct extensive experiments comparing the performance of open-source LLMs with JudgeRail, against traditional harmful content detection models and commercial APIs. They introduce a logit rectification method to refine LLM outputs, ensuring more valid classifications and reducing latency. Results show that open-source LLMs equipped with JudgeRail perform comparably to commercial APIs and outperform conventional detection methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The research question is well-motivated, emphasizing the importance of detecting harmful content and preventing prompt jailbreaking in publicly deployed models. This paper effectively explores how to adapt open-source LLMs for better harmful content detection. In a landscape dominated by commercial APIs, it is crucial to investigate scalable and effective methods for open-source models. The results show that while open-source LLMs may not surpass commercial solutions in all respects, they can perform comparably, highlighting their value in content moderation efforts.\", \"They conduct extensive experiments to provide a robust comparison between open-source LLMs and both traditional and commercial detection models. Also, they explore the validity of their design choices very meticulously.\"], \"weaknesses\": [\"The novel logit rectification method has shown effectiveness on a limited set of examples. However, it is difficult to assess its overall impact on the framework's performance. 
The paper is missing comparisons using simple prompts on the LLMs and ablation studies that evaluate performance with and without the logit rectification method, as these analyses could provide clearer insights into its contribution.\", \"While the paper is generally easy to understand, the experiments section is densely written, making it challenging to follow all the observations. For example, Section 4.3 would benefit from the inclusion of small headings or bolded paragraph headings to better organize and group the observations, which would significantly enhance readability.\"], \"minor_comment\": [\"You use \\\"a LLM\\\" in many places (lines 183, 192 etc.), but I think it should be \\\"an LLM\\\". Please check with a native speaker and make corrections if required\"], \"questions\": [\"Do you have any insights on how these open-source models operate without the JudgeRail framework? Are the LLMs capable enough with simple prompts, and does JudgeRail generally enhance their performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response (Part 1)\", \"comment\": \"We appreciate all reviewers for their valuable comments, suggestions, and questions. We have revised our manuscript accordingly to address these points. Additionally, a diff file is included in the supplementary material for the reviewers' convenience.\"}", "{\"title\": \"Response to Reviewer 4vQg (Part 1)\", \"comment\": \"Thank you for your review and the insightful comments.\\n\\n# Weakness 1\\n\\nWe initially focused on comparing with content moderation methods that are more suitable for practical use cases, leading us to select moderation tools and models (such as Perspective API, OpenAI Moderation API, ShieldGemma, and LlamaGuard3) released by companies with real moderation demands. We acknowledge the importance of empirically comparing with research-oriented moderation methods. 
Following your suggestion, we have attempted to include comparisons with SplineLLM and RigorLLM.\\n\\n\\nSplineLLM proposes to utilize the internal representations of LLMs to characterize a given prompt and generation. In SplineLLM, content moderation is a subtask, where an instance of SplineLLM is trained and tested on the same dataset, Jigsaw [1]. Meanwhile, RigorLLM integrates several different processes and components into its proposed framework, including energy-based hidden embedding data augmentation, optimization-based prompt suffix generation, and a fusion-based model combining robust KNN with LLMs. This indicates that RigorLLM has a relatively more sophisticated implementation. Indeed, when evaluating RigorLLM, we encountered difficulties in deploying RigorLLM, facing running and configuration issues. Due to limited time, we have not yet obtained the results. We will continue to work on reproducing it.\\n\\n\\n\\nIn contrast, we have successfully reproduced the SplineLLM approach and included the results in our experiments, as shown in the following table in markdown format. For AdvBench, we report the accuracy, while for other datasets, we report the F1-score. The latency is measured in seconds.\\n\\n\\n| Model/Dataset | HateCheck | HateXplain | OpenAI Mod | AdvBench | ToxicChat | Latency |\\n| --------------- | --------- | ---------- | ---------- | -------- | --------- | ------- |\\n| Martin-ha | 0.592 | 0.511 | 0.504 | 0.000 | 0.114 | 0.001 |\\n| ToxRoberta | 0.839 | 0.685 | 0.612 | 0.210 | 0.274 | 0.002 |\\n| S-nlp | 0.812 | 0.664 | 0.684 | 0.019 | 0.265 | 0.001 |\\n| Perspective API | 0.862 | 0.683 | 0.701 | 0.054 | 0.250 | 1.000 |\\n| OpenAI Mod. 
API | 0.934 | 0.744 | 0.790 | 0.104 | 0.254 | 1.030 |\\n| LlamaGuard3 | 0.926 | 0.720 | 0.791 | 0.979 | 0.497 | 0.159 |\\n| ShieldGemma | 0.892 | 0.729 | 0.794 | 0.612 | 0.684 | 0.191 |\\n| SplineLLM | 0.815 | 0.667 | 0.481 | 0.892 | 0.139 | 0.063 |\\n| GLM4(JR) | 0.894 | 0.719 | 0.714 | 0.729 | 0.385 | 0.102 |\\n| Mistral0.2(JR) | 0.884 | 0.706 | 0.676 | 0.950 | 0.586 | 0.088 |\\n| Gemma2(JR) | 0.910 | 0.746 | 0.756 | 0.992 | 0.584 | 0.098 |\\n\\n\\nAs previously mentioned, SplineLLM is trained and evaluated on the same dataset. \\nAs shown in the above table, the generalization of SplineLLM is limited when evaluated across different datasets. Furthermore, the performance of SplineLLM is more closely aligned with Martin-ha, ToxRoberta, and S-nlp, with a significant difference being its performance on AdvBench, which approaches that of several LLM-based moderation solutions. Due to the simplicity of SplineLLM, its serving latency is noticeably lower than that of LLM-based moderation solutions, yet still significantly higher than Bert-based moderation solutions.\"}", "{\"title\": \"Response to Reviewer p6GN (Part 3)\", \"comment\": \"# Question 1\\n\\nJudgeRail can be implemented on the GPT models and we have presented the evaluation results for GPT4 equipped with JudgeRail in the response above.\\n\\n\\n# Question 2\\n\\nThank you for your valuable question. We would like to demonstrate that JudgeRail can be applied to text-to-image models. We have evaluated JudgeRail with several text-to-image prompt datasets from prior research [5]. 
The results are shown below (reporting accuracy; the Template prompts dataset contains 30 samples, the Lexica prompts dataset contains 404 samples, and the other two datasets each contain 500 samples.):\\n\\n| Dataset | Template prompts | Lexica prompts | MS COCO prompts | 4chan prompts |\\n| --------- | ---------------- | -------------- | --------------- | ------------- |\\n| Gemma(JR) | 1.00 | 0.38 | 0.99 | 0.94 |\\n\\nWe will provide a description on utilizing JudgeRail for recognizing harmful prompts used for attacking text-to-image generation models and present the updated evaluation results in the appendix.\\n\\n\\n# Question 3\\n\\nSince we cannot reproduce ToxicDetector as previously mentioned, we can only compare to the latency reported in [2]. \\nSpecifically, we first referenced the results from [2] for the Perspective API, which reports a latency of 0.8 seconds. This latency is close to our measurements for the latency of calling Perspective API, which is around 1 second. Based on this consistent latency result, we observed that the ToxicDetector reported in [2] has a latency of 0.078 seconds, and our average latency ranges from 0.088 to 0.102 seconds. This demonstrates that ToxicDetector is faster than the proposed method by around 13%.\\n\\n# Question 4\\n\\nPlease refer to the response for weakness 3.\\n\\n\\n\\n[1] Realtoxicityprompts: Evaluating neural toxic degeneration in language models\\n\\n[2] Efficient Detection of Toxic Prompts in Large Language Models\\n\\n[3] Yongjin Yang, Joonkee Kim, Yujin Kim, Namgyu Ho, James Thorne, and Se-Young Yun. 2023. [HARE: Explainable Hate Speech Detection with Step-by-Step Reasoning](https://aclanthology.org/2023.findings-emnlp.365). In *Findings of the Association for Computational Linguistics: EMNLP 2023*, pages 5490\\u20135505, Singapore. Association for Computational Linguistics.\\n\\n[4] He X, Zannettou S, Shen Y, et al. 
You only prompt once: On the capabilities of prompt learning on large language models to tackle toxic content[C]//2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024: 770-787. https://arxiv.org/pdf/2308.05596\\n\\n[5] Yiting Qu, Xinyue Shen, Xinlei He, Michael Backes, Savvas Zannettou, and Yang Zhang. 2023. Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23). Association for Computing Machinery, New York, NY, USA, 3403\\u20133417. https://doi.org/10.1145/3576915.3616679\"}", "{\"summary\": \"The paper presents a moderation framework for harmful text detection using open-source LLMs. First, the framework uses prompting based on judicial principles (assigning a judge role, using chain-of-thought prompting) and secondly, restricts the logit output to a predetermined few-shot learned set of labels that are based on Perspective API, the OpenAI Moderation API and LlamaGuard3. The authors claim this is more efficient while performing almost on par with the closed models\\u2019 APIs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a simple and low-cost approach using prompting as a content moderation technique, along with controlled decoding. By adapting the prompting strategy of open LLMs to the schema of the baseline APIs, also including LlamaGuard3 and ShieldGemma, along with controlled decoding, the authors achieve competitive performance without extra fine-tuning. This is interesting in spite of its simplicity. They provide a multi-faceted evaluation of false-positive classification.\\n\\nThe paper is generally well-written.\", \"weaknesses\": [\"While multiple aspects of the false-positive ratio are evaluated, I have several concerns regarding the evaluation.\", \"1. 
The related work section mentions two recent approaches, SplineLLM and RigorLLM, which JudgeRail is not compared to (only what the authors call conventional models based on BERT: martin-ha, toxberta, s-nlp, all bert-based); I find this comparison important. How does this approach differ from those two?\", \"2. Label set and constrained decoding:\", \"Lines 305-310: E.g., \\u201cFor Perspective API, we converted its multi-label detection results into binary classification \\u201d: how does it perform using multi-label output?\", \"How much does the approach depend on the decoding vocabulary?\", \"I find a more detailed evaluation of the logit distribution important:\", \"different decoding vocabulary sizes\", \"a comparison to the performance using the overall logit distribution, without restrictions\", \"The authors used only 100 samples to create the decoding vocabulary (logit distribution): how much does performance depend on the sample size and does it change with more/less samples?\"], \"writing\": \"The phrasing in some parts is unnecessarily strong, e.g.: Lines 20-22: \\\"accurately interprets an LLM\\u2019s classification intent, rigorously controls its output format, and significantly accelerates detection\\\"\", \"questions\": \"A more general question:\\nInterestingly, this simple approach performs on par with closed models' APIs and also fine-tuned ones such as LlamaGuard3, while also being much more cost-efficient. What is your intuition on that? E.g., how much does it depend on the task/datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 4vQg (Part 2)\", \"comment\": \"# Weakness 2\\n\\n(1) We chose to evaluate Perspective API by its binary classification performance, as Perspective API does not have a released, associated labeled multi-label dataset for fine-grained comparison. 
Therefore, to evaluate and compare JudgeRail in a multi-label setting, we selected the OpenAI Moderation API, which offers a labeled multi-label dataset.\\n\\n\\n\\n(2) We apologize for the misleading presentation. We would like to specify that we introduce the logit rectification mechanism primarily to extract valid formatted decisions, even when an LLM generates outputs that do not conform to JudgeRail's output format specification. In other words, the logit rectification mechanism serves as an error-handling mechanism, avoiding the need for post-output parsing-based error handling methods. Therefore, we used the 100 samples to validate that using logit rectification, compared to recognizing model decisions from its unstructured generated content via sophisticated parsing mechanisms, has minimal impact on the moderation performance.\\n\\n\\n(3) Following the above clarification, we would like to specify that we did not use the selected 100 samples to build a decoding vocabulary. The decoding vocabulary size depends on the label system used. For example, when using the Perspective API label system, the vocabulary can range from \\\"0\\\"-\\\"6\\\". When using the OpenAI Moderation API label system, the vocabulary can range from \\\"0\\\"-\\\"9\\\" and \\\"A\\\"-\\\"E\\\".\\n\\nFor the comparative experiment with and without the logit rectification, we will continue to test with larger scales, spanning 500 to 1000 samples, with an equal number of samples randomly sampled from all five datasets used in this work.\\n\\nWe will clarify the aforementioned misleading points in Section 3.2 of the revised manuscript.\\n\\n\\n\\n# Writing: \\n\\nThank you for the advice. 
We will revise the description in the corresponding section of the manuscript.\\n\\n# Question:\\n\\nThank you for raising this question, and we are glad to provide further clarification on our thoughts.\\n\\nFirst, we believe that an LLM trained and released for generic text generation tasks is commonly pre-trained on a vast amount of data, enabling the model to already understand certain common knowledge, such as that used in composing harmful text. Indeed, by examining most harmful text datasets, we find that the content moderation task primarily involves common sense and everyday language, with rather limited requirements for domain-specific knowledge.\\n\\nFurthermore, by surveying other literature, we have noticed that some prior research [2] has highlighted the questionable effectiveness of fine-tuning LLMs, as it primarily changes the style of the LLM's output rather than its knowledge.\\n\\nThe combination of these latter two observations and the former intuition guided our design of the prompting framework. Our goal is to leverage this common sense more effectively. Therefore, we chose to assign an LLM a commonly known Judge character, with the commonly acquired principle of \\\"presumption of innocence\\\", and validated our design through thorough experiments.\\n\\n\\n\\n[1] Adams, C., Jeffrey, S., Julia, E., Lucas, D., Mark, M., Nithum, and Will, C. Toxic comment classification challenge, 2017.\\n\\n[2] Lin B Y, Ravichander A, Lu X, et al. The unlocking spell on base llms: Rethinking alignment via in-context learning[C]//The Twelfth International Conference on Learning Representations. 2023.\"}", "{\"metareview\": \"The authors propose an LLM-based content moderation framework (JudgeRail) that prompts LLMs to assume the persona of a judge in detecting harmful text. 
This approach removes the need for additional fine-tuning, which the authors show leads to an efficient and effective system compared against other well-known content moderation solutions like LlamaGuard and ShieldGemma. \\n\\n**Strengths:**\\n\\n- The paper offers a solution to an important, timely problem (automated content moderation detection) \\n\\n- The discussion of detection latency and 4-bit comparison in the paper is interesting, and I believe the authors have a valid point about inference-time efficiency being an overlooked factor that impacts the practicality of moderation solutions. Their results indicate the JudgeRail approach has improved latency over baselines (requiring only 46% to 55% of the time needed by LlamaGuard and ShieldGemma). \\n\\n**Weaknesses:**\\n\\n- It seems like the results are actually quite mixed for JudgeRail (table 2), and while there is lower latency, in practice I don't believe the difference would be significant enough to offset the lower performance. Accuracy is more of a concern. For HateXplain, JudgeRail does better but the difference is marginal and doesn't appear to be statistically significant. \\n\\n- The experimentation lacks rigor. There were several baselines highlighted in the paper, but these were absent from comparison until the authors' rebuttal. Given the limited time for experimentation, the validity of these results is questionable. The paper would benefit from statistical significance testing. It also seems like the proposed approach would be highly brittle to the number and selection of in-context examples. While the GPT-4 labeling comparison is interesting, I'm not sure why the authors chose to denote the models' agreement with GPT-4 as the \\\"fixed\\\" results. Are the authors convinced these were errors in the original human-annotated dataset? 
If so, this needs better analysis than re-labeling with GPT-4, which has its own biases and may miss nuances that human annotators observe like dialectical variation or sarcasm. I do not find it very surprising that open LLMs would produce more similar results to GPT-4 than human annotators. \\n\\n- I do not agree with the authors that computational restrictions on training content moderation models is a major problem in content moderation. Training a 7b parameter model with the corpora sizes mentioned is feasible on smaller GPUs than an A100, and while this will be more time-consuming, even in industry there is rarely the need for continuous retraining. Additionally, there are many cloud computing resources available to researchers, sometimes at no cost. I think financial/psychological cost of label verification / human annotation for expanding existing corpora is a stronger argument to avoid retraining. \\n\\nOverall, I am not entirely convinced by either the results or motivation of the paper, and cannot recommend acceptance yet. However, I do think there are many positive aspects of the paper and the JudgeRail framework is a very solid, interesting idea. Any NLP/AI venue would be suitable for the paper, so I suggest the authors take a bit more time to refine it and resubmit.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers agreed about the importance and timeliness of the paper's focus on content moderation. They also highlighted the discussion of latency. The main concern of the reviewers was absent baselines. There are many new results in the rebuttal that address the reviewers' questions about baselines. 
I believe the inclusion of these results will significantly improve the paper and strengthen the authors' arguments about the effectiveness of their approach.\\n\\nI also believe the authors satisfactorily addressed in their rebuttal the comparisons to Jury Learning and approaches requiring ensembles of persona-driven models; however, I agree with reviewer MwTd that the technical contribution of the paper could be improved.\"}", "{\"comment\": \"Thanks to the authors for their detailed responses and clarifications. Considering the responses provided by the authors to all reviews, I continue to believe that this paper has merits. Therefore, I will maintain my current score.\"}", "{\"title\": \"Response to Reviewer yCFh\", \"comment\": \"Thank you for your valuable feedback and suggestions.\\n\\n# Weakness 1\\n\\nThank you very much for your valuable suggestions. As we have also responded to the 2nd reviewer's comments, for the comparative experiment with and without the logit rectification, we will continue to test with larger scales, spanning 500 to 1000 samples, with an equal number of samples randomly sampled from all five datasets used in this work. However, we would like to highlight that we introduce the logit rectification mechanism primarily to extract valid formatted decisions, even when an LLM generates outputs that do not conform to JudgeRail's output format specification. In other words, the logit rectification mechanism serves as an efficient and effective error-handling mechanism. Therefore, we used the 100 samples to validate that using logit rectification, compared to recognizing model decisions from its unstructured generated content via sophisticated parsing mechanisms, has minimal impact on the moderation performance.\\n\\nWe followed your suggestion and employed two simple prompt methods from several related studies [1] [2] to conduct experimental comparisons. 
Given that these methods are aimed at hate speech and toxic content, and due to time constraints, we selected three harmful text datasets for evaluation, as shown below (reporting F1-score):\\n\\n| Gemma2 | HateCheck | HateXplain | OpenAI Mod | Latency(s) |\\n| ------------- | --------- | ---------- | ---------- | ---------- |\\n| simple_COT[1] | 0.905 | 0.711 | 0.693 | 7.310 |\\n| simple[2] | 0.887 | 0.712 | 0.730 | 7.392 |\\n| JudgeRail | 0.910 | 0.746 | 0.756 | 0.098 |\\n\\nThis new evaluation result shows that simple prompting can shape an LLM to obtain relatively satisfactory performance. Meanwhile, JudgeRail maintains superior performance across all datasets and has a significant advantage in terms of latency.\\n\\n\\n\\n[1] Yongjin Yang, Joonkee Kim, Yujin Kim, Namgyu Ho, James Thorne, and Se-Young Yun. 2023. [HARE: Explainable Hate Speech Detection with Step-by-Step Reasoning](https://aclanthology.org/2023.findings-emnlp.365). In *Findings of the Association for Computational Linguistics: EMNLP 2023*, pages 5490\\u20135505, Singapore. Association for Computational Linguistics.\\n\\n[2] He X, Zannettou S, Shen Y, et al. You only prompt once: On the capabilities of prompt learning on large language models to tackle toxic content[C]//2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024: 770-787. https://arxiv.org/pdf/2308.05596\\n\\n# Weakness 2 and writing\\n\\nThank you for your valuable feedback regarding the organization of the experiments section. We will revise this section by adding clear subheadings or bolded paragraph headings to better structure the observations and improve clarity. The typo \\\"a LLM \\\" will also be fixed to \\\"an LLM\\\". Thanks again for your kind advice. 
\\n\\n\\n\\n# Questions:\\n\\nThank you for raising this question, and we are glad to provide further clarification on our thoughts.\\n\\nSince an open-source LLM trained and released for generic text generation tasks is commonly pre-trained on a vast amount of data, we believe the model already understands certain common knowledge, such as those used in composing harmful text. Indeed, by examining most harmful text datasets, we find that the content moderation task primarily involves common sense and everyday language, with rather limited requirements for domain-specific knowledge. As such, we think these open-source models may be able to obtain satisfactory performance even without the JudgeRail framework.\\n\\nAccording to the result shown in the above table, as expected, an LLM equipped with simple prompting techniques demonstrates good moderation performance, while JudgeRail demonstrates a notable improvement on the dataset and also has a significant advantage in terms of latency. However, during this newly added experiment, we found that an LLM often does not output according to the format instructed in the prompt. This highlights the need for a mechanism, such as the proposed logit rectification, to complement the prompts, thereby better converting an LLM into a powerful content moderation tool.\"}", "{\"title\": \"Response to Reviewer MwTd\", \"comment\": \"Thank you for your valuable comments.\\n\\n# Weakness 1\\n\\nWe would like to clarify the design of our label system and will provide a more detailed description in the revised manuscript. To avoid arbitrary selection of harmful categories, we have adopted the categories defined by Perspective API and LlamaGuard3, which offer more fine-grained classifications compared to other reviewed moderation tools.\\n\\nTo be specific, the P1, P2, ..., P6 correspond to the Perspective API's harmful categories (in Table 1, where 1.Toxicity, 2.Severe toxicity, ..., 6.Threat). 
We incorporate all 6 categories in our label system when testing the content datasets. \\nAnd S1, S2, ..., S14 correspond to LlamaGuard3's harmful categories (in Table 1, where 1.Violent crimes, 2.Non-violent crimes, ..., 14.Defamation). We incorporate all 14 categories in our label system when testing the prompt datasets. \\n\\n# Weakness 2 \\n\\nThank you for providing this reference [1]. We acknowledge that in Jury Learning [1], dissenting voices are integrated by modeling individual annotators and allowing practitioners to define the jury composition. Additionally, we have identified another related work, Digital Juries [2], which proposes a civics-oriented approach for adjudicating content moderation cases. However, both approaches necessitate deploying multiple models, potentially consuming significant GPU memory resources and introducing a complex decision-making mechanism, which may result in additional latency.\\n\\nIn contrast, we named our framework JudgeRail, inspired by the role-playing scheme commonly adopted in developing jailbreak prompts. While one can assign an LLM a harmful character, which may be challenging due to more mature safety alignment, we can also assign an LLM a helpful character to combat harmful content. This character must be fair, driving us to select the principle of \\\"presumption of innocence,\\\" which naturally fits the \\\"Judge\\\" character and is a common-sense principle that most LLMs can understand and follow.\\n\\nMoreover, while sharing the spirit of introducing knowledge from the judicial system, practicality is a key consideration in JudgeRail. 
This motivates us to design a generic prompt framework that works with individual LLMs and incorporates the logit rectification mechanism, which accelerates processing and efficiently handles out-of-scope generation issues.\\n\\nWe will revise the manuscript to more clearly articulate our novelty and incorporate the newly provided references.\\n\\n# Question\\n\\nThank you for pointing out the insufficiently clear statement. We will rephrase our presentation to highlight our observation regarding low-precision models. Specifically, our evaluation results, collected from three LLMs including Gemma2(JR), ShieldGemma, and LlamaGuard3, indicate that adopting their low-precision counterparts introduces rather limited performance impact. We find this observation intriguing, as it contrasts with studies [3,4] that have demonstrated the crucial role of model precision in generative tasks. This differing impact on model performance, driven by low-precision models, leads us to suspect that decision-making moderation tasks may have distinct requirements for model precision. We will provide a clearer presentation in the revised manuscript.\\n\\n[1] Mitchell L. Gordon, Michelle S. Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S. Bernstein. 2022. Jury Learning: Integrating Dissenting Voices into Machine Learning Models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 115, 1\\u201319. https://doi.org/10.1145/3491102.3502004\\n\\n[2] Jenny Fan and Amy X. Zhang. 2020. Digital Juries: A Civics-Oriented Approach to Platform Governance. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1\\u201314. https://doi.org/10.1145/3313831.3376293\\n\\n[3] Li S, Ning X, Wang L, et al. Evaluating quantized large language models[J]. 
arXiv preprint arXiv:2402.18158, 2024.\\n\\n[4] Gong Z, Liu J, Wang J, et al. What Makes Quantization for Large Language Model Hard? An Empirical Study from the Lens of Perturbation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(16): 18082-18089.\"}", "{\"title\": \"Thanks the authors for their clarification\", \"comment\": \"Thanks the authors for their clarification. After reviewing the authors' response and other reviewers' review, I would like to keep my original score.\"}", "{\"title\": \"Response to Reviewer p6GN (Part 1)\", \"comment\": \"Thank you for your valuable feedback.\\n\\n# Weakness 1\\n\\nThank you for your suggestion. We will discuss the limitations from the following points: First, due to latency considerations, our current in-context learning mechanism is relatively simple. More complex mechanisms, such as Retrieval-Augmented Generation (RAG), will be explored in future work. Additionally, we have shown that the label system has a clear impact on detection performance, while existing label systems have some degree of semantic ambiguity. This inspires us to consider designing a more refined label system with better-separated semantic representations. Finally, while the detection capabilities of JudgeRail essentially depend on its underlying LLM, JudgeRail also benefits from this as the underlying model evolves in its capability.\\n\\n# Weakness 2&4 \\n\\nThank you for the reference, and we will include it in our related work. Following your suggestion, we employed the RealToxicityPrompts [1] dataset for evaluating several models used in our paper and the recommended ToxicDetector [2]. 
Since the ToxicChat dataset is also a toxic prompt dataset, we have included the test results for both datasets, as shown in the following table (reporting F1-score with the best performance):\\n\\n| Dataset\\\\method | Gemma2(JR) | LlamaGuard3 | ShieldGemma | Perspective API(Reproduce\\\\Report) | ToxicDetector[2] |\\n| ------------------- | ---------- | ----------- | ----------- | --------------------------------- | ---------------- |\\n| RealToxicityPrompts | 0.617 | 0.231 | 0.482 | 0.685\\\\0.8674 | 0.9628 |\\n| ToxicChat | 0.687 | 0.497 | 0.684 | 0.250 | - |\\n\\nAs shown in the above table, the reported performance of ToxicDetector outperforms other models on the RealToxicityPrompts by a large margin. We noticed from [2] that ToxicDetector is trained and evaluated on this RealToxicityPrompts dataset. As we have done for SplineLLM to respond to the second reviewer's comment, we would like to further evaluate the generalization performance of ToxicDetector. However, we find that this method has not been released to the public, making it difficult to evaluate ToxicDetector [2] on other datasets.\\n\\nIn the meantime, we have noticed that both [2] and our work have adopted Perspective API for comparison, and the performance of Perspective API on RealToxicityPrompts has been reported. We have evaluated Perspective API on RealToxicityPrompts with the same sample size to reproduce the reported result. However, as shown in the above table, our reproduced performance is significantly worse than the reported result. Such inconsistency makes it difficult to draw a reasonable comparison with the recommended ToxicDetector.\\n\\nNevertheless, we have evaluated our proposed method, along with other two LLM-based moderation models, on this RealToxicityPrompts dataset. 
We have also presented our previous evaluation results obtained on ToxicChat in the above table, as both datasets contain prompt-type harmful text samples.\"}", "{\"summary\": \"The paper introduces JudgeRail, a framework designed to enhance harmful text detection using open-source large language models (LLMs) without requiring extensive fine-tuning. By leveraging \\\"judicial prompting\\\" and a novel \\\"logit rectification\\\" technique, JudgeRail ensures accurate text classification and significantly reduces detection latency. Evaluated against established tools like OpenAI\\u2019s Moderation API and specialized models like LlamaGuard3 and ShieldGemma, JudgeRail-equipped LLMs demonstrated competitive performance while achieving faster processing times (only 46-55% of the time required by other advanced models).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1) Introducing label systems, novel logit rectification method and calibration is really interesting and found to be helpful.\\n\\n2) The evaluation and baselines includes the state of the art models.\\n\\n3) The latency seems be a strength of the proposed model.\", \"weaknesses\": \"1) No limitations section\\n\\n2) No comparison with prompt detection techniques. Prompt detectors are simple and easy to integrate as well as latency is quite low.\\n\\n3) Lack of enough baselines. Readers and researchers will be interested in comparison with prompting techniques which are much simpler to execute like baselines with only chain of thought prompting and other advanced prompting techniques.\\n\\n4) The datsets implemented looks irrelevant hateful prompts differ from hate speech and hatexplain. please refer[1],[2] and it would be great if you can implement those.\\n\\n5) The novel method cannot be implemented on the GPT models which is the model used by most. 
This means the method cannot be utilised by the many users who depend on GPT models.\\n\\n[1] RealToxicityPrompts: Evaluating neural toxic degeneration in language models\\n[2] Efficient Detection of Toxic Prompts in Large Language Models\", \"questions\": \"1) Can you implement the proposed model on the GPT models?\\n\\n2) Can this be applicable to vision models as well, like text-to-image generation? Please write a section in the appendix.\\n\\n3) Is the proposed method much faster than prompt detection techniques?\\n\\n4) Why is there no evaluation comparison with prompting techniques? Though they might look simple, readers would like to see their evaluation metrics as well.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces JudgeRail, a framework designed to guide open-source language models in detecting harmful text through judicial prompting and a novel logit rectification technique. The study compares JudgeRail\\u2019s effectiveness and efficiency against existing moderation tools, showing that JudgeRail enhances detection accuracy and latency without requiring fine-tuning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Significance: The paper tackles a critical issue, harmful text detection, which is increasingly urgent as LLMs are deployed widely in real-world applications.\\n\\n2. Comprehensive Evaluation: The paper includes comparisons with state-of-the-art models and examines latency, a rarely explored aspect in text moderation studies, providing practical insights for real-world applications.\", \"weaknesses\": \"1. Insufficient Justification: The paper does not adequately justify the specific harmful categories used in the label system, which could limit the generalizability of its results. What are P1, P2, etc., and S1, S2, etc.?\\n\\n2. 
Lack of Novelty in Core Concept: Judge framework in content moderation is not new. There are some missed literature already explored this concept and implemented. Such as:\\n\\nMitchell L. Gordon, Michelle S. Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S. Bernstein. 2022. Jury Learning: Integrating Dissenting Voices into Machine Learning Models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 115, 1\\u201319. https://doi.org/10.1145/3491102.3502004\\n \\nIt would be important for the authors to justify their approach's novelty and difference with previous relevant work.\", \"questions\": \"It is not clear why the paper claims that \\\"These findings suggest that text moderation tasks have lower requirements for high-precision computing compared to text generation tasks.\\\" Low precision is the results, how could it explain the requirement?\\nHowever, achieving acceptable results with lower precision does not inherently justify that text moderation has lower precision requirements. \\nInstead, it simply indicates that this particular framework, JudgeRail, was effective in the given tests. The paper would benefit from clarifying the distinction between observed outcomes and actual task requirements, and ideally, providing evidence or reasoning that explains why moderation tasks inherently need less precision compared to generation tasks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for the response. 
After reading the comparison with different baselines, including existing prompt-based approaches following reviewer p6GN\u2019s concerns (not mentioned in the original manuscript), I find that many similar methods with comparable performance exist.\nAs I find this limits the novelty of this approach mainly to low latency (which could be mitigated using inference-focused tools such as vllm), I will keep my original score.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"We appreciate your comments and will revise our manuscript accordingly.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer p6GN (Part 2)\", \"comment\": \"# Weakness 3\n\nAs we have also responded to the 3rd reviewer's comments, we employed two simple prompt methods from several related studies [3] [4] to conduct experimental comparisons. Given that these methods are aimed at hate speech and toxic content, and due to time constraints, we selected three harmful text datasets for evaluation, as shown below (reporting F1-score):\n\n\n\n| Gemma2 | HateCheck | HateXplain | OpenAI Mod | Latency(s) |\n| ------------- | --------- | ---------- | ---------- | ---------- |\n| simple_COT[3] | 0.905 | 0.711 | 0.693 | 7.310 |\n| simple[4] | 0.887 | 0.712 | 0.730 | 7.392 |\n| JudgeRail | 0.910 | 0.746 | 0.756 | 0.098 |\n\nThis new evaluation result shows that simple prompting can shape an LLM to obtain reasonably satisfactory performance. Meanwhile, JudgeRail maintains superior performance across all datasets and has a significant advantage in terms of latency.\n\n\n\n# Weakness 5\n\nActually, our proposed JudgeRail can be applied to GPT models. We chose not to include GPT models in our original experiments since we aim to propose a solution that uses open-source LLMs, which have comparable model sizes to existing LLM-based moderation tools, such as LlamaGuard3 and ShieldGemma. 
Nevertheless, to better present the generalization capability of JudgeRail, we have evaluated the performance of using GPT4 with JudgeRail on three datasets and present the results in the following table. We selected these datasets since their data sizes are relatively small and cause relatively lower costs of calling the GPT4 API.\\n\\n| Dataset | AdvBench | HateCheck | OpenAI Mod |\\n| ---------- | -------- | --------- | ---------- |\\n| GPT4(JR) | 0.988 | 0.821 | 0.732 |\\n| Gemma2(JR) | 0.992 | 0.910 | 0.756 |\\n\\nAs shown in the above table, GPT4 equipped with JudgeRail -- GPT4(JR) obtains comparable performance to our best-performing Gemma2(JR) on AdvBench and OpenAI Moderation datasets, while performing worse on the HateCheck dataset. By examining the samples mistakenly classified by GPT4(JR), we find that, while HateCheck primarily focuses on hate speech, some of its samples labeled as Non-hate still contain offensive materials. This type of content is often recognized as harmful by GPT4. This aligns with our previous findings mentioned in the paper regarding the ambiguity in the label system and the inaccuracies in dataset labels.\"}" ] }
CEE9cAQJ10
A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Data
[ "Jiankang Wang", "Jianjun Xu", "Xiaorui Wang", "Yuxin Wang", "Mengting Xing", "Shancheng Fang", "Zhineng Chen", "Hongtao Xie" ]
Synthesizing high-quality data for continual training has been proven to be effective in enhancing the performance of Large Language Models (LLMs). However, previous synthetic approaches struggle to easily scale up data and incur high costs in the pursuit of high quality. In this paper, we propose the Graph-based Synthetic Data Pipeline (GSDP), an economical and scalable framework for high-quality reasoning data synthesis. Inspired by knowledge graphs, we extracted knowledge points from seed data and constructed a knowledge point relationships graph to explore their interconnections. By exploring the implicit relationships among knowledge, our method achieves $\times$255 data expansion. Furthermore, GSDP led by open-source models, achieves synthesis quality comparable to GPT-4-0613 while maintaining $\times$100 lower costs. To tackle the most challenging mathematical reasoning task, we present the GSDP-MATH dataset comprising over 1.91 million pairs of math problems and answers. After fine-tuning on GSDP-MATH, GSDP-7B based on Mistral-7B achieves 37.7\% accuracy on MATH and 78.4\% on GSM8K, demonstrating the effectiveness of our method. The dataset and models trained in this paper will be available.
[ "Large Language Models", "Mathematical Reasoning", "Data Synthesis" ]
Reject
https://openreview.net/pdf?id=CEE9cAQJ10
https://openreview.net/forum?id=CEE9cAQJ10
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zcvLLreT5H", "xiKy6eEfDU", "w0UCMe9lc2", "tRZDSwYjt8", "qM3bCUvUkf", "oyuW6Ugeb8", "leRtiWFckF", "lEIQQRiNQ4", "iS6PDKDHXA", "hcJvu2Vwcv", "glKoX9lrKJ", "g4SPYcckOJ", "byTRt966WS", "aBhJcLMZ4y", "aAfrmb0Zv0", "YFhMHZgRRp", "Su31wNe099", "Qr58mN6gC1", "QMukAvekBp", "OcJsZXAZYA", "KgAUdQZJWk", "KPvGcgEaUc", "IaimE8GvFE", "FSRNuR5Z9N", "DpEG6928Cg", "CyyYPZAppA", "Cn5BNCV1RR", "8zHgmmCe1z", "6g0QGdX6CL", "4IvJYrWRGi", "0JIX27utY0" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732009219292, 1732092902553, 1730694402731, 1732539720722, 1732009066351, 1737523489656, 1732787178443, 1732973009446, 1731762001458, 1732111078956, 1731931286820, 1730714827253, 1732787932905, 1732713836075, 1732539825645, 1733107417446, 1734121662655, 1732539783684, 1732009091843, 1732092041842, 1733057901175, 1732539805456, 1730829663544, 1732567361743, 1732092649194, 1733160872754, 1733106422751, 1732111363814, 1732787600453, 1732588423417, 1732458729424 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Reviewer_dEUx" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Reviewer_867H" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Reviewer_coVk" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Area_Chair_1dBV" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Reviewer_coVk" ], [ "ICLR.cc/2025/Conference/Submission2174/Reviewer_dEUx" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Reviewer_coVk" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ], [ "ICLR.cc/2025/Conference/Submission2174/Authors" ] ], "structured_content_str": [ "{\"title\": \"Question 4\", \"comment\": \"We selected Mistral-7B, a baseline model most commonly used in previous methods, for comparison with other approaches. Additionally, we chose two relatively new baseline models (LLaMA3-8B and Qwen1.5-7B) to demonstrate the versatility of our method. There are few studies using these two baseline models: LLaMA3-8B is primarily used for comparisons with this year's MammoTH2-8B, and currently, no methods use Qwen1.5-7B as a benchmark model. 
However, our experimental results clearly show that models based on Qwen1.5-7B and LLaMA3-8B significantly outperform the baseline.\\n\\nFurthermore, to avoid any potential misunderstandings, we have restructured the main table. Please refer to Table 1 in the revision.\"}", "{\"title\": \"Acknowledgement\", \"comment\": \"Finally, we would like to express our sincere gratitude for your valuable feedback. Based on your suggestions, we have made revisions in Section 3.6 regarding the testing of model generalization ability, highlighted in brown. Additionally, we have included Appendix C to provide detailed explanations of the experiments.\\n\\nIf you have any further questions or concerns, please feel free to let us know.\"}", "{\"summary\": \"This paper proposes the Graph-based Synthetic Data Pipeline (GSDP), a scalable and cost-effective framework for synthesizing high-quality data by leveraging knowledge graphs to expand data and reveal implicit relationships. GSDP, shown to generate synthesis quality comparable to GPT-4-0613 at a fraction of the cost, achieves strong performance in mathematical reasoning tasks, with the GSDP-7B model reaching 37.7% accuracy on MATH and 78.4% on GSM8K.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. strong experimental performance\\n2. interesting research problem\", \"weaknesses\": \"1. The motivation is not clear enough: the authors point out the limitations of existing methods, including limited scalability, high cost, and similarity to seed data. However, the similarity to seed data remains questionable and lacks a quantitative investigation. Moreover, why the graph-based synthetic method can solve these limitations is not very clear in the introduction section.\\n2. In the KP extraction process, did the authors check the quality of the generated KPs and the impact of each filter and clustering operation on the quality of KPs?\\n3. 
It would be nice to conduct a study showing that the diversity of the generated dataset is better than that of other synthetic methods.\\n4. The baseline results based on Qwen and Llama-3 models are missing; it would be nice to present these results.\\n5. It would be nice to conduct a case study to show how the KP graph works and to show its superiority compared to existing methods.\", \"questions\": \"please see the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Question 1&3\", \"comment\": \"We compared GSDP-MATH with MetaMath, MathCoder, and MathScale, which are open source datasets (detailed information on these datasets can be found in Table 1), in terms of **seed similarity** and **data diversity**, as a supplement to the graph-based synthetic method:\\n- **Quantitative Analysis of Seed Similarity**:\\n - **Methodology**: We employed the embedding model ([Bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)) to measure the similarity between synthetic data and seed data. Specifically, for each synthetic data instance, we calculated its closest match in the seed data and recorded the highest similarity score. By aggregating the similarity scores for all synthetic data instances, we plotted histograms to visualize the distribution of similarity scores and analyze the similarity between the datasets. 
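The per-instance best-match computation described here can be sketched roughly as follows (a minimal numpy illustration, not the authors' actual script; the embedding model itself and the histogram plotting are omitted, and the array shapes are toy values):

```python
import numpy as np

def max_seed_similarity(synth_emb, seed_emb):
    """For each synthetic embedding, return its highest cosine
    similarity to any seed embedding (row-wise max)."""
    synth = synth_emb / np.linalg.norm(synth_emb, axis=1, keepdims=True)
    seed = seed_emb / np.linalg.norm(seed_emb, axis=1, keepdims=True)
    sims = synth @ seed.T   # (n_synth, n_seed) cosine similarities
    return sims.max(axis=1) # best seed match per synthetic instance

# Toy example: 3 synthetic and 2 seed embeddings in 4-d space.
rng = np.random.default_rng(0)
synth, seed = rng.normal(size=(3, 4)), rng.normal(size=(2, 4))
scores = max_seed_similarity(synth, seed)
# Aggregate the per-instance scores into histogram bins for plotting.
hist, edges = np.histogram(scores, bins=np.arange(-1.0, 1.01, 0.05))
```
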
(**We encourage the reviewer to refer to Appendix E (Figure 5, 6, 7 and 8) of the revision, where four histograms provide a comprehensive comparison of \\\"seed similarity\\\" across datasets.**)\\n - **Results**: The results indicate that the similarity scores between GSDP-MATH and the seed data predominantly fall within the range of 0.55 to 0.65. In contrast, the similarity scores for MathScale predominantly fall within the range of 0.8 to 0.9, while those for MetaMathQA and MathCodeInstruct are concentrated around 1. This demonstrates that our synthetic data exhibits lower similarity to the seed data, whereas the other methods generate data that is highly similar to their respective seed datasets.\\n - **Analysis**: This is because MetaMath and MathCoderInstruct rely heavily on seed data, resulting in synthesized data that is very similar to the seed data. MathScale synthesizes data based on knowledge points, which reduces overall similarity and creates a more uniform data distribution. However, because it does not fully explore implicit relationships between knowledge points (non-co-occurrence knowledge points), there is still a significant amount of data similar to the seed data. In contrast, GSDP takes into account both explicit relationships (co-occurrence knowledge points) and thoroughly explores implicit relationships. This allows our method to synthesize datasets that have a more uniform distribution and lower overall similarity, with almost no data being very similar to the seed data.\\n\\n\\n- **Quantitative Analysis of Data Diversity**:\\n - **Methodology**: Given that the smallest dataset among the four contains 80K samples and the overall data volume is relatively large, we uniformly and randomly sampled 80K instances from each dataset and computed their embeddings. For each subset of embeddings, we performed clustering and compared the number of cluster centers to assess the differences in data diversity across datasets. 
We adopted the density-based DBSCAN algorithm and utilized the k-distance graph to determine key DBSCAN parameters (e.g., \\u03b5), ensuring a more scientifically grounded adaptation to the characteristics of each dataset.\\n - **Results**: As shown in Table 1, our method resulted in a greater number of cluster centers, indicating higher diversity within the GSDP-MATH dataset. This finding highlights the richness of our synthetic data.\\n - **Analysis**: Synthesizing data based on seed data or co-occurrence knowledge points often results in problems of the same type as the seed data. In contrast, our method generates new types of problems, thereby increasing the diversity of problem types in our dataset.\\n\\n\\n *Table 1: Information of key attributes across various datasets. The \\\"Sample Size\\\" column indicates the number of instances sampled from each dataset for clustering analysis, and \\\"Number of Clustering Centers\\\" represents the number of distinct clusters identified in the dataset.*\\n\\n | Method | Seed | Synthesized Data | Size | Sample Size | Number of Clustering Centers |\\n |------------|-------------|------------------|-------|-----|------------------------------|\\n | MetaMath | GSM8K+MATH | MetaMathQA | 395K | 80K | 339 |\\n | MathCoder | GSM8K+MATH | MathCodeInstruct | 80K | 80K | 271 |\\n | MathScale | MWPBENCH | MathScale | 2M | 80K | 488 |\\n | GSDP | MATH | GSDP-MATH | 1.9M | 80K | 541 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Acknowledgements and Clarifications\", \"comment\": \"Thank you very much for your positive feedback on our work. We have addressed your concerns in our responses to other reviewers. Here, I will provide a detailed reply focusing on two aspects to address some potential concerns you might have: \\\"Automation Level of the Pipeline\\\" and \\\"Additional Clarifications of Our Work.\\\"\\n\\n### **1. 
Automation Level of the Pipeline**\\n\\nOur entire data synthesis pipeline is fully automated, requiring only the pre-design of prompts and synthesis algorithms.\\n\\n**(1) Screening of KPs**\\n\\nRegarding your concerns about the knowledge point screening process, we have elaborated on the \\\"dual filtering\\\" screening strategy in Section 2.2 of our paper. After designing the prompts, input-output, and algorithm process, the entire procedure is handled by an embedding model and an LLM. Given the limited number of KPs (<10k), the process does not require significant computational time or manual intervention.\\n\\nHere is a brief introduction to the \\\"dual filtering\\\" strategy. For more details, please refer to **Section 2.2** of the revision and **Appendix D.1**.\\n\\nThe \\\"dual filtering\\\" strategy leverages embedding models and LLMs to remove low-quality and duplicate knowledge points:\\n- **Removal of low-quality knowledge points**: The LLM is used to filter out KPs that are ambiguous, contain mathematical errors, do not conform to mathematical standards, or are overly detailed (involving specific problem requirements).\\n- **Deduplication of knowledge points**: First, the embedding model processes the KPs to calculate the similarity between them. KPs with a similarity score between 0.9 and 1.0 are considered similar; those with a similarity score between 0.7 and 0.9 need further confirmation through the LLM to exclude those that appear similar but are actually different, e.g., \\\"Geometric sequence\\\" (similarity score: 0.805) vs. \\\"Arithmetic sequence,\\\" and \\\"Sine function in trigonometry\\\" (similarity score: 0.865) vs. \\\"Cosine function in trigonometry.\\\" KPs with a similarity score below 0.7 are considered dissimilar. 
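The threshold logic just described amounts to a simple routing rule per KP pair (a hedged sketch; the actual LLM confirmation call and the later grouping step are omitted):

```python
def route_pair(similarity: float) -> str:
    """Route a KP pair by embedding similarity, per the thresholds above:
    0.9-1.0 -> treat as duplicates; 0.7-0.9 -> ask the LLM to confirm;
    below 0.7 -> keep both KPs as distinct."""
    if similarity >= 0.9:
        return "duplicate"
    if similarity >= 0.7:
        return "llm_confirm"
    return "distinct"

# e.g. "Geometric sequence" vs. "Arithmetic sequence" scored 0.805,
# so the pair is routed to the LLM for confirmation.
route_pair(0.805)  # -> "llm_confirm"
```
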
Then, we group all similar KPs into categories and use the LLM to select or generate the best representative KP for each category.\\n\\n**(2) Automation Level of Other Parts of the Pipeline**\\n\\nThe extraction of knowledge points, selection of reasonable KP combinations, synthesis of problems and answers, and final quality checks in the pipeline are all completed using pre-designed prompts and synthesis algorithms, with no need for manual selection or verification. The manual verification of the reasonableness and effectiveness of prompts and algorithm designs is an essential part of every work and cannot be avoided.\\n\\n### **2. Additional Clarifications of Our Work**\\n\\nBased on the suggestions from the other two reviewers (**one of whom, dEUx, has positively evaluated our work**), we have conducted additional experiments included in the revision's Appendix and main text. Here is a summary addressing some potential concerns:\\n\\n**(1) Out-of-Domain Reasoning Ability Test**\\n\\nIn **Section 3.6** and **Appendix C**, we tested the out-of-domain reasoning ability of the GSDP model. We selected multiple test sets from various fields (such as ARC-C, MMLU-STEM, GPQA, BBH, TheoremQA, and MBPP) to evaluate the model's reasoning capability in physics, chemistry, biology, and computer science (We call them scientific reasoning). 
Although the model was trained only on GSDP-MATH, it showed an average improvement of over **5%** (with a maximum improvement of **10.8%**) in scientific reasoning ability tests, indicating that GSDP-MATH not only significantly enhances the model's mathematical reasoning ability but also boosts its out-of-domain reasoning skills.\\n\\n**(2) Quantitative Analysis of the Dataset**\\n\\nIn **Appendix E**, we conducted a quantitative analysis of the dataset's \\\"seed similarity\\\" and \\\"data diversity.\\\" The results in Figure 5 and Table 6 of the revision show that GSDP-MATH has lower seed similarity and higher data diversity.\\n\\n**(3) Other Modifications**\\n\\nIn the revision, we added a case study to better illustrate the GSDP workflow and its advantages over other methods. Additionally, we made it clearer in the introduction the importance of KPRG and how the Graph-based method addresses the issues of limited scalability, high cost, and similarity to seed data in existing methods. (**Appendix D.2, D.3, Figure 3**)\\n\\n### **If the above clarifications have addressed your concerns, we hope you might consider championing our paper. We would greatly appreciate your support for our work. Thank you once again for your feedback and consideration!**\"}", "{\"title\": \"Further Clarifications and Acknowledgement\", \"comment\": \"**If the above clarifications have addressed your concerns, we hope you might consider supporting our paper. We would greatly appreciate your support for our work. If you have any further questions, please do not hesitate to contact us. Thank you once again for your feedback and consideration!**\"}", "{\"title\": \"Methods Comparison and Cross-Domain Task Analysis\", \"comment\": \"**Thank you for the reviewer\\u2019s comments and feedback. Below is our detailed response:**\\n\\n---\\n\\n### I. 
Regarding the comparison between the two methods suggested by the reviewer\\n\\nData-driven and reinforcement learning-based methods are two key approaches to enhancing the reasoning capabilities of large language models (LLMs). Our method improves reasoning capabilities by training the model on synthesized large-scale, high-quality, and diverse mathematical data. In contrast, the Critical Plan Step Learning (CPL) method employs reinforcement learning, combining Monte Carlo Tree Search with Step-level Advantage Preference Optimization to explore and learn critical planning steps. We believe our approach offers the following distinct advantages:\\n\\n1. **Lower Complexity:**\\n \\n The data-driven method is relatively straightforward, requiring neither complex algorithms nor intricate training procedures. Instead, it focuses on collecting or synthesizing large amounts of high-quality data. This simplicity makes the approach easier to implement and scale, while also streamlining the learning process for the model.\\n \\n2. **Scalability:**\\n \\n Our approach efficiently synthesizes large-scale datasets from a small amount of seed data. As described in the paper, we achieved a 255x expansion (from 7.5k to 1.91M examples). In contrast, the data construction process in reinforcement learning is more intricate, making large-scale data expansion more challenging.\\n \\n3. **Generalizability:**\\n \\n Our method has demonstrated its generalizability across various LLMs, including Mistral-7B, LLaMA3-8B, and Qwen1.5-7B, all of which showed improved performance. On the other hand, CPL's experimental results are limited to DeepSeek-Math-Base and do not showcase its effectiveness across different models.\\n \\n---\\n\\n### II. Regarding the generalization capability issue raised by the reviewer\\n\\nWe deeply understand your concern about generalization capability and have conducted detailed experimental validations in our response to address this. 
Specifically, we analyzed the out-of-domain reasoning capability by dividing it into two parts: **mathematical reasoning** and **cross-domain reasoning**. Here are the details:\\n\\n1. **Mathematical Reasoning:**\\n \\n Our synthetic data was generated based on the training set of the Math dataset. Strictly speaking, the evaluation benchmarks (e.g., GSM8K, SVAMP, GAOKAO) are out-of-domain test datasets. As shown clearly in the main results of the paper, the model's performance on these mathematical reasoning tasks improved significantly after fine-tuning with GSDP-MATH.\\n \\n2. **Cross-Domain Reasoning:**\\n \\n To evaluate cross-domain reasoning capability, we employed testing scripts provided by [MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) and [OpenCompass](https://github.com/open-compass/opencompass) to test the fine-tuned GSDP model. The GSDP model is based on Mistral-7B, LLaMA3-8B, and Qwen1.5-7B, and it is fine-tuned only with GSDP-MATH. We evaluated the model across multiple benchmarks, including ARC-C, GPQA, BBH, MMLU-stem, TheoremQA, and MBPP. As shown in Table 1, the model did not experience any decline in cross-domain reasoning capability due to large-scale data training. On the contrary, it achieved performance improvements across multiple benchmarks.\\n \\n*Table 1: Main results on in-domain and out-of-domain reasoning tasks. 
The \\u0394 rows highlight the improvements of the GSDP models over their corresponding baseline models.*\\n\\n| Model | MATH | GSM8K | ARC-C | MMLU-stem | GPQA | BBH | TheoremQA | MBPP |\\n|----------------|------|-------|-------|-----------|------|------|-----------|------|\\n| Mistral-7B | 11.2 | 36.2 | 74.2 | 50.1 | 24.7 | 55.7 | 19.2 | 47.5 |\\n| GSDP-7B | 37.7 | 78.4 | 78.8 | 58.3 | 32.3 | 60.3 | 25.6 | 54.8 |\\n| *\\u0394 Mistral-7B* | *+26.5* | *+42.2* | *+4.6* | *+8.2* | *+7.6* | *+4.6* | *+6.4* | *+7.3* |\\n| LLaMA3-8B | 21.3 | 54.8 | 78.6 | 55.6 | 27.2 | 61.1 | 20.1 | 54.9 |\\n| GSDP-8B | 37.2 | 76.5 | 80.5 | 60.8 | 30.8 | 63.7 | 24.2 | 58.4 |\\n| *\\u0394 LLaMA3-8B* | *+15.9* | *+21.7* | *+1.9* | *+5.2* | *+3.6* | *+2.6* | *+4.1* | *+3.5* |\\n| Qwen1.5-7B | 13.3 | 54.1 | 75.6 | 45.5 | 26.7 | 45.2 | 14.2 | 52.1 |\\n| GSDP-Qwen-7B | 36.8 | 73.4 | 79.2 | 56.3 | 29.8 | 50.3 | 21.6 | 52.5 |\\n| *\\u0394 Qwen1.5-7B* | *+23.5* | *+19.3* | *+3.6* | *+10.8* | *+3.1* | *+5.1* | *+7.4* | *+0.4* |\\n\\nIn summary, large-scale, high-quality, and diverse training data genuinely enhance the model\\u2019s reasoning capabilities across various domains. This conclusion is strongly supported by the quantitative evaluation results, further confirming that our research direction of exploring better data synthesis methods is both correct and highly meaningful.\\n\\n---\\n### III. Regarding typographical errors\\n\\nWe sincerely apologize for the typographical errors due to our oversight and will release a revised version to correct all inaccuracies.\"}", "{\"title\": \"Response to Questions and Weaknesses\", \"comment\": \"1. **Significance of the Research Direction**\\n\\n We believe that our research direction is highly significant. Despite the rapid development of large language models (LLMs), there is still substantial room for improvement in complex reasoning tasks (e.g., mathematics, code, physics). 
An effective way to achieve this is by training models with large-scale, high-quality reasoning data. Due to the scarcity of high-quality data, synthetic data has become a popular method for constructing the necessary training datasets. However, existing methods for synthetic data generation face issues such as limited scalability, high costs, and high similarity to seed data. Our goal is to propose a scalable, cost-effective, and efficient method for synthetic data generation. Our experimental results demonstrate that our method is scalable, cost-effective, and significantly enhances model performance.\\n\\n Although our research method differs from traditional natural language processing (NLP) approaches, this does not detract from its academic value and practical significance. On the contrary, LLMs have introduced more perspectives and opportunities to the NLP field, reflecting the continuous progress of scientific research and technological development.\\n\\n We do not believe that a simple and effective method lacks innovation. In the current LLMs training context, high-quality data is very scarce, and being able to synthesize large-scale, high-quality data through a simple, effective method is a low-cost, high-yield endeavor.\\n\\n Many advanced works have already recognized the importance of synthetic data, such as MetaMath [1] (ICLR2024), ToRA [2] (ICLR2024), MathScale [3] (ICML2024), MammoTH2 [4] (NeurIPS2024), and MathCoder [5] (ICLR2024). Therefore, we believe that publishing our paper at this conference is appropriate.\\n\\n2. **Regarding the term \\u201cKnowledge Point\\u201d**\\n\\n The reason we use the term \\u201cKnowledge Point\\u201d in our paper is to align with previous methods that also use this term. However, your suggestion is very appreciated. We have added an explanation where \\u201cKnowledge Point\\u201d first appears in the revision and have improved the caption for Figure 2, both marked in red.\\n\\n3. 
**Concerning the term \\u201cdata\\u201d in the Title**\\n\\n - **3.1. Explanation of the Term \\\"Data\\\"** \\n The term \\u201cdata\\u201d in the title is intended to indicate multiple types of data. Our method is not limited to data from a single domain but can be applied to data from various fields, such as physics, chemistry, biology, and code. We have also stated in the introduction that complex reasoning encompasses multiple aspects, and we chose the most challenging mathematical reasoning tasks for our experiments.\\n\\n - **3.2. Title Modification** \\n Your suggestion is very valuable. To avoid ambiguity and to accurately convey our intent, we have decided to change the term \\u201cdata\\u201d in the title to \\u201creasoning instructions\\u201d, while also making minor adjustments to the abstract.\\n\\n4. **Regarding the Comparison of Model Performance**\\n\\n As for the issue of existing high-performance models, our experiment aims to prove the effectiveness of the synthetic data generated by our method, i.e., significantly enhancing the reasoning capabilities of LLMs. We only used GSDP-MATH for training, and the comparison models were also trained using synthetic data. Therefore, we would like to emphasize that our goal is to prove the effectiveness of our synthetic method, which has significant advantages over other synthetic methods.\\n\\n The high-performance models you mentioned were trained with a vast amount of rich mathematical data; thus, our models cannot be directly compared to these commercial-grade mathematical models. The key contribution of our paper lies in proposing an efficient data synthesis method for the industry, hoping to provide data support for other researchers in their LLM training endeavors.\\n\\n---\\n\\n[1] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. 
arXiv preprint arXiv:2309.12284, 2023.\\n\\n[2] Gou, Z., Shao, Z., Gong, Y., Yang, Y., Huang, M., Duan, N., Chen, W., et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint arXiv:2309.17452, 2023.\\n\\n[3] Zhengyang Tang, Xingxing Zhang, Benyou Wang, and Furu Wei. Mathscale: Scaling instruction tuning for mathematical reasoning. arXiv preprint arXiv:2403.02884, 2024.\\n\\n[4] Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu Chen. Mammoth2: Scaling instructions from the web. arXiv preprint arXiv:2405.03548, 2024b.\\n\\n[5] Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi Song, Mingjie Zhan, and Hongsheng Li. Mathcoder: Seamless code integration in LLMs for enhanced mathematical reasoning. arXiv preprint arXiv:2310.03731, 2023.\"}", "{\"title\": \"Question 2: In the KP extraction process, did authors check the quality of the generated KPs and the impact of each filter and clustering operation to the quality of KPs.\", \"comment\": \"## Question 2: In the KP extraction process, did authors check the quality of the generated KPs and the impact of each filter and clustering operation to the quality of KPs\\n\\n---\\n\\nIn Section 2.2 of the paper, we provided an overview of the Knowledge Points (KPs) quality filtering process. Below, we offer a detailed explanation:\\n\\n\\n\\n### **Dual Filtering Process**\\n\\nEnsuring the quality of KPs is crucial, as using erroneous KPs can result in low-quality synthesized problems, while using overly similar KPs can lead to duplicated problems. These issues increase the computational and time costs for both problem synthesis and quality validation. As mentioned in the paper, we employ a dual filtering approach using both an embedding model and an LLM to remove low-quality KPs, categorize them, and merge duplicates. 
The main steps are as follows:\\n\\n- **Eliminating Low-Quality KPs:**\\n \\n The LLM is used to filter out KPs that are vague, contain mathematical errors, do not adhere to proper mathematical terminology, or are overly detailed. For instance, vague KPs can be too broad in meaning, failing to standardize the model's output effectively. Erroneous KPs may lead the model to synthesize incorrect questions, while overly detailed KPs can overly constrain the model's output.\\n \\n- **Categorization:**\\n \\n We first use an embedding model ([Bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)) to calculate pairwise similarity scores between KPs. KPs with similarity scores between **0.90 and 1.0** are deemed to have the same meaning, while those with scores between **0.70 and 0.90** undergo an additional check by the LLM to confirm if they are truly synonymous. KPs with scores below **0.70** are treated as distinct. Based on this process, KPs are grouped into classes, with similar KPs placed in the same class. These thresholds were determined through an analysis of the KP set.\\n \\n- **Summarization:**\\n \\n For each KP class, the LLM identifies the most representative KP to act as the class representative. If no existing KP in the class is suitable, the LLM synthesizes a new KP to represent the class.\\n \\n\\n\\n### **Impact of Dual Filtering**\\n\\n- **Impact on Knowledge Points:**\\n\\n - **De-duplication:**\\n \\n The LLM helps avoid incorrect grouping of distinct KPs that the embedding model may consider overly similar. For example:\\n \\n - *\\\"Geometric sequence\\\"* vs. *\\\"Arithmetic sequence\\\"* (similarity score: 0.805).\\n - *\\\"Sine function in trigonometry\\\"* vs. *\\\"Cosine function in trigonometry\\\"* (similarity score: 0.865). The LLM ensures such pairs are classified into different KP classes.\\n - **Exclusion of Low-Quality KPs:**\\n \\n The LLM effectively removes vague, mathematically incorrect, or excessively complex KPs. 
For example:\\n \\n - **Vague KPs:** Examples include *\\\"Problem-solving strategies\\\"* and *\\\"Mathematical techniques\\\"*.\\n - **Mathematically Incorrect KPs:** Examples include *\\\"The sum of the outer angles of a polygon depends on the number of sides\\\"*, *\\\"The matrix result of multiplying a matrix by its inverse is the matrix itself\\\"*, and *\\\"A series converges if its terms approach zero.\\\"*\\n - **Overly Detailed KPs:** These typically include highly specific problem statements, such as *\\\"Solving the quadratic equation $x^2 + 5x + 6 = 0$ by factoring\\u2026\\u2026\\\"*\\n\\n- **Impact on Final Results:**\\n\\n - When only the embedding model was used for de-duplication, the quality check revealed that only **26%** of the synthesized problems met the quality standard. After introducing dual filtering with the LLM, this proportion increased to **45%**. This demonstrates that the dual filtering process significantly improves dataset quality while reducing problem synthesis costs.\\n\\n\\n\\n### **Examples of Knowledge Points**\\nFinally, to demonstrate the diversity and comprehensiveness of our knowledge base, we randomly sampled 20 KPs:\\n\\n*\\\"Angle of Rotation\\\"*, *\\\"The unit circle and its properties\\\"*, *\\\"Solving Equations with Multiple Variables\\\"*, *\\\"Right triangles in a sphere\\\"*, *\\\"Inversions in permutations\\\"*, *\\\"Pi ($\\\\pi$) as a constant in geometry and trigonometry\\\"*, *\\\"Perfect Cubes\\\"*, *\\\"Area of Triangles and Squares\\\"*, *\\\"Diophantine Approximation\\\"*, *\\\"Perimeter of a triangle\\\"*, *\\\"Abundant Number\\\"*, *\\\"Graphing a hyperbola\\\"*, *\\\"Determining the base and height of a Parallelogram\\\"*, *\\\"Difference of cosines formula\\\"*, *\\\"Quartic Polynomial\\\"*, *\\\"Polynomial Inequalities\\\"*, *\\\"Congruence of Integers\\\"*, *\\\"Solving equations involving digits\\\"*, *\\\"Sign Analysis\\\"*, *\\\"Calculation of expected value for a fair eight-sided 
die\\\"*.\"}", "{\"summary\": \"This paper presents a novel approach for generating synthetic data related to mathematics and mathematical reasoning using a graph-based synthetic data pipeline (GSDP). It automatically extracts knowledge points from seed data to create a knowledge point relationship graph. Utilizing the MATH training set of 7,500 problems and answers as seeds, the GSDP-MATH dataset expands to over 1.91 million pairs of math problems and answers. The authors report achieving accuracies of 37.7% on the MATH dataset and 78.4% on GSM8K when tested with several 7B LLM models. However, the dataset and models are not available for verification.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The idea of extracting knowledge points from seed data, forming a graph to explore their relationships, and then generating training data from these compressed concepts is intriguing. It aligns well with the principles of autoencoders. I believe this is probably a good angle for presenting your approach.\", \"weaknesses\": \"While the idea is plausible, it requires validation. Overall, I'm uncertain how this approach compares to the recent work on Critical Plan Step Learning, which appears to demonstrate greater generalization capabilities and performs better on cross-domain tasks. In principle, once you generate 1.91 million training samples, you may lose some generalization power. 
Additionally, the paper contains several typos.\", \"questions\": \"In principle, does generating 1.91 million data samples result in a loss of generalization power?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"The Comparison Between Critical Plan Step Learning (CPL) Method Results and GSDP Model Results\", \"comment\": \"We added CPL-final to the table to compare with our models, and highlighted it in red.\\n\\n\\n*Table 1: Main results on scientific reasoning tasks. The \\u0394 rows highlight the improvements of the GSDP models over their corresponding baseline models.*\\n\\n| Model | ARC-C | MMLU-stem | GPQA | BBH | TheoremQA | MBPP | AVG |\\n|----------------------|-------|-----------|------|------|------------|--------------|------|\\n| Mistral-7B | 74.2 | 50.1 | 24.7 | 55.7 | 19.2 | 47.5 | 45.2 |\\n| LLaMA3-8B | 78.6 | 55.6 | 27.2 | _61.1_ | 20.1 | _54.9_ | 49.6 |\\n| Qwen1.5-7B | 75.6 | 45.5 | 26.7 | 45.2 | 14.2 | 52.1 | 43.2 |\\n| WizardMath-7B-V1.1 | 78.3 | 54.4 | 30.8 | 57.5 | 21.1 | 56.4 | 49.8 |\\n| MAmmoTH-7B-Mistral | 72.1 | 48.1 | 25.3 | 48.5 | **31.5** | 46.7 | 45.4 |\\n| MetaMath-Mistral-7B | 76.7 | 53.0 | 28.8 | 51.2 | 19.1 | 46.3 | 45.9 |\\n| MathScale-Mistral | 77.3 | 54.9 | **35.4** | 56.8 | 20.8 | 54.0 | 49.9 |\\n| `CPL-final` | `56.1` | `54.9` | `34.3` | `60.5` | `- ` | `-` | `-`|\\n| GSDP-Qwen-7B | _79.2_ | 56.3 | 29.8 | 50.3 | 21.6 | 52.5 | 48.3 |\\n| `\\u0394 Qwen1.5-7B` | `+3.6` | `+10.8` | `+3.1` | `+5.1` | `+7.4` | `+0.4` | `+5.1` |\\n| GSDP-7B | 78.8 | _58.3_ | _32.3_ | 60.3 | _25.6_ | 54.8 | _51.7_ |\\n| `\\u0394 Mistral-7B` | `+4.6` | `+8.2` | `+7.6` | `+4.6` | `+6.4` | `+7.3` | `+6.5` |\\n| GSDP-8B | **80.5** | **60.8** | 30.8 | **63.7** | 24.2 | **58.4** | **53.1** |\\n| `\\u0394 LLaMA3-8B` | `+1.9` | `+5.2` | `+3.6` | `+2.6` | `+4.1` | `+3.5` | `+3.5` |\"}", "{\"title\": \"Thanks for the clarifications\", \"comment\": 
\"Generating synthetic data is of course highly motivated for many kinds of tasks.\\nThere are some steps in your proposal, like screening KPs for quality, that still appear largely or fully manual.\\nHave you accounted for the cognitive workload of KP screening compared to conventional ways of generating data, or ways of helping programs synthesize data for augmentation?\\nOverall, based on your inputs, I can increase my score to weak reject, but I cannot champion the paper.\\nIt seems a bit preliminary, with various loose ends still left to tie up.\\nBut I appreciate your effort with this paper!\"}", "{\"comment\": \"Dear reviewers,\\n\\nWe have carefully addressed your suggestions and questions, updated the paper, and submitted the revised version. The main changes are highlighted in different colors for your convenience. We kindly request you to review the revision.\\n\\nAs the deadline is approaching, we hope you can let us know if there are any further issues so that we can address them promptly.\\n\\nThank you very much!\"}", "{\"title\": \"Key Improvements in the Rebuttal\", \"comment\": \"Based on the valuable suggestions and questions from the three reviewers (**we sincerely appreciate the support of coVk and dEUx for our work**), we have conducted additional experiments and included them in the revised Appendix and main text. Below is a summary of the key changes:\\n\\n**(1) Out-of-Domain Reasoning Ability Test**\\n\\nIn **Section 3.6** and **Appendix C**, we tested the out-of-domain reasoning ability of the GSDP model. We selected multiple test sets from various fields (such as ARC-C, MMLU-STEM, GPQA, BBH, TheoremQA, and MBPP) to evaluate the model's reasoning capability in physics, chemistry, biology, and computer science (We call them scientific reasoning). 
Although the model was trained only on GSDP-MATH, it showed an average improvement of over **5%** (with a maximum improvement of **10.8%**) in scientific reasoning ability tests, indicating that GSDP-MATH not only significantly enhances the model's mathematical reasoning ability but also boosts its out-of-domain reasoning skills. **Table 5 in the revision** and **Table 1 below this comment** show the improvement of our model on scientific reasoning tasks and its comparison with other models (**including the CPL method requested by Reviewer 867H. Additionally, as the deadline is approaching, we hope Reviewer 867H can provide feedback so that we can respond as soon as possible**).\\n\\n---\\n*Table 1: Main results on scientific reasoning tasks. The \\u0394 rows highlight the improvements of the GSDP models over their corresponding baseline models.*\\n\\n| Model | ARC-C | MMLU-stem | GPQA | BBH | TheoremQA | MBPP | AVG |\\n|----------------------|-------|-----------|------|------|------------|--------------|------|\\n| Mistral-7B | 74.2 | 50.1 | 24.7 | 55.7 | 19.2 | 47.5 | 45.2 |\\n| LLaMA3-8B | 78.6 | 55.6 | 27.2 | _61.1_ | 20.1 | _54.9_ | 49.6 |\\n| Qwen1.5-7B | 75.6 | 45.5 | 26.7 | 45.2 | 14.2 | 52.1 | 43.2 |\\n| WizardMath-7B-V1.1 | 78.3 | 54.4 | 30.8 | 57.5 | 21.1 | 56.4 | 49.8 |\\n| MAmmoTH-7B-Mistral | 72.1 | 48.1 | 25.3 | 48.5 | **31.5** | 46.7 | 45.4 |\\n| MetaMath-Mistral-7B | 76.7 | 53.0 | 28.8 | 51.2 | 19.1 | 46.3 | 45.9 |\\n| MathScale-Mistral | 77.3 | 54.9 | **35.4** | 56.8 | 20.8 | 54.0 | 49.9 |\\n| CPL-final | 56.1 | 54.9 | 34.3 | 60.5 | - | - | -|\\n| GSDP-Qwen-7B | _79.2_ | 56.3 | 29.8 | 50.3 | 21.6 | 52.5 | 48.3 |\\n| `\\u0394 Qwen1.5-7B` | `+3.6` | `+10.8` | `+3.1` | `+5.1` | `+7.4` | `+0.4` | `+5.1` |\\n| GSDP-7B | 78.8 | _58.3_ | _32.3_ | 60.3 | _25.6_ | 54.8 | _51.7_ |\\n| `\\u0394 Mistral-7B` | `+4.6` | `+8.2` | `+7.6` | `+4.6` | `+6.4` | `+7.3` | `+6.5` |\\n| GSDP-8B | **80.5** | **60.8** | 30.8 | **63.7** | 24.2 | **58.4** | **53.1** |\\n| 
`\\u0394 LLaMA3-8B` | `+1.9` | `+5.2` | `+3.6` | `+2.6` | `+4.1` | `+3.5` | `+3.5` |\\n---\\n\\n**(2) Quantitative Analysis of the Dataset**\\n\\nIn **Appendix E**, we conducted a quantitative analysis of the dataset's \\\"seed similarity\\\" and \\\"data diversity.\\\" The results in **Figure 5** and **Table 6** of the **revision** show that GSDP-MATH has lower seed similarity and higher data diversity.\\n\\n**(3) Other Modifications**\\n\\nIn the revision, we added a case study to better illustrate the GSDP workflow and its advantages over other methods. Additionally, we made it clearer in the introduction the importance of KPRG and how the Graph-based method addresses the issues of limited scalability, high cost, and similarity to seed data in existing methods. (**Appendix D.2, D.3, Figure 3 in the revision**)\"}", "{\"metareview\": \"This paper proposes a Graph-based Synthetic Data Pipeline (GSDP) for generating synthetic data to improve mathematical reasoning capabilities in language models. The framework extracts knowledge points (KPs) from seed mathematical problems and creates a knowledge graph representing relationships between these concepts. Using the MATH training set (7,500 problems) as seeds, GSDP generated a dataset of 1.91 million problem-answer pairs (GSDP-MATH). When tested with 7B parameter LLM models, the approach achieved 37.7% accuracy on the MATH dataset and 78.4% accuracy on GSM8K. The paper claims to offer a more scalable and cost-effective approach compared to existing methods, producing synthesis quality comparable to GPT-4-0613.\\n\\nThe paper addresses an important challenge in the field - the scarcity of training data for mathematical reasoning tasks. The proposed approach of using interconnected knowledge points for problem synthesis is novel and theoretically well-motivated, drawing parallels to autoencoder principles. 
The method demonstrates the ability to boost the performance of smaller language models, potentially making mathematical reasoning more accessible with less computational resources. The experimental results show promising performance improvements, particularly considering the model size constraints.\\n\\nHowever, the approach heavily relies on LLM-based workflows with prompt engineering, rather than introducing fundamental algorithmic innovations. The quality and impact of the KP extraction process and filtering operations are not thoroughly validated, and there is a lack of quantitative investigation into dataset diversity and comparison with other synthetic methods. The dataset and models are not available for verification, missing baseline results for important models (Qwen and Llama-3), and performance still lags behind state-of-the-art results on benchmark datasets. Additionally, there is insufficient motivation for why a graph-based approach addresses the stated limitations, lack of detailed case studies demonstrating the KP graph's effectiveness, use of non-standard terminology (\\\"knowledge point\\\") without precise definition, and several typos noted in the manuscript.\\n\\nBased on the reviews and analysis, rejection is recommended for this paper. While it presents an interesting approach, the heavy reliance on LLM-based workflows with minimal algorithmic innovation raises concerns about the fundamental contribution to the field. The lack of rigorous validation of key components (KP extraction, filtering, dataset diversity) and missing baseline comparisons makes it difficult to fully assess the method's effectiveness. Although the performance shows promise for smaller models, it does not advance the state-of-the-art, and the inability to verify results due to unavailable data and models is concerning. 
The presentation and motivation issues suggest the work would benefit from substantial revision and additional experimental validation before being ready for publication. While the paper shows promise and addresses an important problem, it would benefit from addressing these limitations in a revised version, including more thorough empirical validation and clearer positioning of the technical contribution beyond LLM-based workflows.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised several concerns about the paper, which the authors attempted to address through their responses. R1 expressed concerns about the paper primarily presenting an LLM-based workflow rather than fundamental innovation, though the authors explained their manual screening process for KPs. While R1 acknowledged some improvements, they maintained that the work appeared preliminary.\\n\\nR2 questioned the effectiveness of the approach compared to Critical Plan Step Learning (CPL). The authors responded by highlighting three main advantages of their method: lower implementation and training complexity, better scalability with 255x data expansion, and broader generalizability across various LLM architectures.\\n\\nR3 initially had concerns about motivation clarity, KP extraction quality control, dataset diversity, and missing baseline results. The authors' responses satisfied R3, leading to an improved rating. Following the rebuttal, R1 maintained a \\\"weak reject\\\" position while acknowledging the value of the clarifications. R2 did not explicitly respond to the authors' rebuttal, while R3 increased their score based on the responses.\\n\\nThe authors successfully provided clear explanations of their method's advantages and improved technical reproducibility while addressing concerns about model generalizability. 
However, significant concerns remained, including limited methodological innovation, the necessity of manual intervention in the workflow, and the preliminary nature of the work as noted by R1. The lack of response from R2 made it difficult to assess if their concerns were adequately addressed.\\n\\nAlthough the rebuttal period provided valuable clarifications, it did not fully address core concerns about the paper's fundamental contribution and methodological innovation. Despite some reviewers becoming more positive after the rebuttal, the remaining concerns about the work's preliminary nature and reliance on LLM-based workflows supported maintaining the original rejection recommendation.\"}", "{\"comment\": \"Dear reviewers,\\n\\nWe have carefully addressed your suggestions and questions, updated the paper, and submitted the revised version. The main changes are highlighted in different colors for your convenience. We kindly request you to review the revision.\\n\\nAs the deadline is approaching, we hope you can let us know if there are any further issues so that we can address them promptly.\\n\\nThank you very much!\"}", "{\"title\": \"Question 1&3\", \"comment\": \"- **Supplementary Explanation on How the Graph-Based Synthesis Method Addresses These Limitations:**\\n \\n The highlight of the graph-based synthesis method is its ability to explore both explicit and implicit relationships among knowledge points, addressing several key limitations:\\n \\n - **High Expansion Ratio:**\\n \\n Previous methods either relied entirely on seed data or focused solely on explicit relationships among knowledge points (co-occurrence knowledge points). However, since the seed data and the explicit relationships within it are inherently limited, these methods struggle to generate a large volume of synthesized data. 
In contrast, our method leverages the Knowledge Points Relationship Graph (KPRG) to uncover numerous new and reasonable knowledge point combinations by exploring implicit relationships. This allows us to achieve a significantly higher expansion ratio.\\n \\n - **Low Seed Similarity:**\\n \\n Methods that rely solely on seed data or explicit relationships typically generate variants of seed data problems, leading to a high degree of similarity with the seed data, as explicit relationships are already present in the seed data. Our method, however, synthesizes data by utilizing both explicit relationships and novel implicit relationships that do not appear in the seed data. This approach results in a diverse set of synthesized data with significantly lower similarity to the seed data.\"}", "{\"title\": \"Question 5\", \"comment\": \"This is a very good suggestion. Figure 2 in the paper shows a case of synthesized data, and Figure 3 illustrates the specific process of how the KPRG works but does not provide a specific case. 
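As a concrete illustration of this explicit/implicit distinction, here is a minimal sketch (our own, not the paper's implementation) that builds a KP co-occurrence graph from toy seed questions and classifies KP pairs by BFS hop distance: one-hop pairs are the explicit co-occurrences, while two- and three-hop pairs are implicit combinations that never appear together in a seed question. Restricting three-hop pairs to a designated core KP is an assumption we make to keep the toy output small.

```python
# Illustrative KPRG sketch: nodes are KPs, edges come from co-occurrence
# within a seed question, and implicit combinations are pairs at graph
# distance 2 or 3. KP labels and the core-KP choice are toy assumptions.
from itertools import combinations
from collections import deque

seed_questions = [      # KPs extracted per seed question
    ["A", "B", "C"],
    ["A", "D"],
    ["D", "E"],
    ["E", "F"],
]
CORE = "A"              # designated core knowledge point (our assumption)

# Explicit relationships: KPs co-occurring in the same question share an edge.
adj = {}
for kps in seed_questions:
    for u, v in combinations(kps, 2):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)


def distance(src, dst):
    """Shortest-path (hop) distance between two KPs via BFS."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # disconnected


hops = {1: [], 2: [], 3: []}
for u, v in combinations(sorted(adj), 2):
    d = distance(u, v)
    if d in (1, 2):
        hops[d].append(f"{u}-{v}")
    elif d == 3 and CORE in (u, v):   # 3-hop pairs only via the core KP
        hops[3].append(f"{u}-{v}")

print(hops[1])  # explicit pairs:  ['A-B', 'A-C', 'A-D', 'B-C', 'D-E', 'E-F']
print(hops[2])  # implicit pairs:  ['A-E', 'B-D', 'C-D', 'D-F']
print(hops[3])  # ['A-F']
```

The point of the sketch is the ratio: from six explicit edges, hop expansion already yields five additional implicit KP combinations, which is the mechanism behind the higher expansion ratio and lower seed similarity argued above.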
Therefore, we have revised Figure 3 (see revision Figure 3) in the revision to include a specific case to describe how explicit and implicit knowledge point combinations are derived.\\n\\nAssuming our seed data contains 4 questions, each with 2 to 3 extracted knowledge points:\\n\\n#### Knowledge Base of Seed Data\\n```text\\nQuestion 1 KPs\\n- (A) \\u201cAngle of Rotation\\u201d\\n- (B) \\u201cThe unit circle and its properties\\u201d\\n- (C) \\u201cTrigonometric identities\\u201d\\n\\nQuestion 2 KPs\\n- (A) \\u201cAngle of Rotation\\u201d\\n- (D) \\u201cProperties of Rotational Symmetry\\u201d\\n\\nQuestion 3 KPs\\n- (D) \\u201cProperties of Rotational Symmetry\\u201d\\n- (E) \\u201c\\ud835\\udf0b as a constant in geometry and trigonometry\\u201d\\n\\nQuestion 4 KPs\\n- (E) \\u201c\\ud835\\udf0b as a constant in geometry and trigonometry\\u201d\\n- (F) \\u201cEuler's formula\\u201d\\n```\\n\\n\\n\\nWe can construct the KPRG in the middle of Figure 3 from the co-occurrence of KPs. According to our definition, we then identify the core knowledge points and the combinations of knowledge points with one-hop, two-hop, three-hop, and community relationships:\\n\\n#### Knowledge Point Combinations\\n```text\\n- Core knowledge point: A\\n- One-hop: A-B, A-C, B-C, A-D, D-E, E-F\\n- Two-hop: A-E, B-D, C-D, D-F\\n- Three-hop: A-F\\n- Community: A-B-C\\n```\\n\\nWe can see that implicit knowledge point combinations identified through KPRG can also be used to synthesize high-quality problems that are not present in the seed data. For example:\\n```text\", \"a_e\": \"The combination of \\\"Angle of Rotation\\\" and \\\"Pi (\\u03c0)\\\" can be used to construct problems involving the calculation of angles in radians and the application of rotational transformations. 
An example problem might require students to convert angles from degrees to radians and perform rotations on a coordinate plane.\", \"c_d\": \"The combination of \\\"Trigonometric identities\\\" and \\\"Properties of Rotational Symmetry\\\" can be used to create problems that involve proving or utilizing trigonometric identities within the context of rotational symmetry. For instance, a problem might ask students to demonstrate how certain trigonometric identities hold true under rotational transformations.\", \"a_f\": \"The combination of \\\"Angle of Rotation\\\" and \\\"Euler's formula\\\" allows for the construction of problems that connect angular rotations to complex exponential functions. An example could involve students using Euler's formula to represent a rotation in the complex plane and interpret the geometric implications of the formula.\\n\\n...\\n```\\n\\nWe use all the identified knowledge point combinations and the following prompt as input, allowing the mathematical model to synthesize new problems. After quality checks, we obtain the GSDP-MATH.\\n\\n#### Prompt for Knowledge Points Extraction\\n```text\\nYou are a math teacher. Now, you need to help your students learn the following math knowledge points. Using these knowledge points as guidelines, please construct a new, original math problem that requires an understanding and application of all these points.\", \"ensure_the_following\": \"1. The constructed problem must be free from any mathematical logic errors.\\n2. The problem must combine all the knowledge points.\\n3. 
The question should be of sufficient difficulty and logically consistent.\", \"knowledge_points_1\": \"{knowledge points1}\", \"knowledge_points_2\": \"{knowledge points2}\\n[knowledge points 3: {knowledge points3}]\", \"please_format_your_response_like_this\": \"\", \"new_problem\": \"{Your new problem here}\", \"reason\": \"{Your explanation here}\\n```\"}", "{\"title\": \"Request for Your Response\", \"comment\": \"Dear reviewer,\\n\\nWe have carefully addressed your suggestions and questions, updated the paper, and submitted the revised version. The main changes are highlighted in different colors for your convenience. We kindly request you to review the revision.\\n\\nAs the deadline is approaching, we hope you can let us know if there are any further issues so that we can address them promptly.\\n\\nThank you very much!\"}", "{\"comment\": \"Dear reviewers,\\n\\nWe have carefully addressed your suggestions and questions, updated the paper, and submitted the revised version. The main changes are highlighted in different colors for your convenience. We kindly request you to review the revision.\\n\\nAs the deadline is approaching, we hope you can let us know if there are any further issues so that we can address them promptly.\\n\\nThank you very much!\"}", "{\"summary\": \"Summary:\\n\\nTraining data comes at a premium when training generative language models to solve math (word) problems. Semi-synthetic instance augmentation is commonly used in such cases. This submission proposes math problem instance augmentation using \\\"knowledge points\\\". I wasn't familiar with this term, but, going by the examples given, a \\\"knowledge point\\\" (KP) looks like a glossary entry of a mathematical term or concept like \\\"Pythagoras theorem\\\" or \\\"completing the square\\\". These KPs can be connected by edges, based on whether/how they are related. 
The proposed method samples small compact subgraphs of this KP-graph, and submits these to an LLM again to generate math problems that require familiarity and expertise over these KPs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Strengths:\", \"Identifies data scarcity problem in training LLMs to solve math (word) problems (but this is not unknown).\", \"Proposes selection of interconnected knowledge points as a way to synthesize high-quality, diverse math problems.\", \"Demonstrates that this form of data synthesis can be used to boost the performance of smaller language models.\"], \"weaknesses\": [\"Weaknesses:\", \"As in an increasing number of LLM-based papers, the \\\"innovation\\\" consists of creating a workflow and taking each task to an LLM with a suitably designed prompt. E.g., a prompt is proposed for knowledge point (KP) extraction from a (problem, solution) pair. There is very lightweight processing (based on cooccurrence) for graph formation among the KPs and the selection of small compact subgraphs from the KP-graph. Then there is a prompt that sends this KP subset into an LLM again, asking it to generate a math problem involving those KPs. Since LLMs are now basically wish-fulfillment machines, such LLM-based papers are incomparable to earlier NLP papers published at such venues.\", \"Perhaps as a partial consequence, the style of writing is quite foreign to students of pre-LLM ML and AI communities. \\\"Chain of thought\\\" is itself a good example of how not to describe a \\\"data structure\\\" to a computer scientist, \\\"knowledge point\\\" perpetuates that style.\", \"We are not necessarily playing a number game here, but [https://paperswithcode.com/sota/math-word-problem-solving-on-math] lists the best MATH performance as 88.1 using a 72B LLM and 87.92 using GPT4-turbo. In comparison, the submission claims GPT-4-0613 is at 42.5 and gets it to 37.7 using a 7B LLM. 
[https://paperswithcode.com/sota/arithmetic-reasoning-on-gsm8k] lists the best GSM8K performance as 96.7 using a 72B LLM and 96.4 using a 7B LLM. In comparison, the submission shows GPT-4-0613 at 92 and gets it to 76.5 using an 8B model. I have no experience in how to give credit for lower performance using smaller models.\"], \"questions\": \"Comments and suggestions:\\n\\nThe title can be greatly improved. \\\"Data\\\" is too generic. If you are focusing on math word problems, say so in the title itself.\\n\\nFig 2 caption should be much longer and explain the point of the diagrams without depending on distant text.\\n\\n\\\"Knowledge point\\\" is too non-standard and nebulous. Please be specific and precise right the first time you use this term and define it in terms of bits and bytes.\\n\\nOverall, this may be better suited to an NLP or AI conference than an ML conference.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response! I appreciate the authors' efforts in addressing my concerns and will raise my score accordingly.\"}", "{\"title\": \"Acknowledgement\", \"comment\": \"Finally, we would like to express our gratitude for your valuable suggestions. Based on your feedback, we have made revisions in the updated version (including the introduction, Table 1, Figure 3, etc.), which are marked in blue. Additionally, we have added Appendices D and E for case studies, KP quality assessment, and quantitative experiments on the dataset.\\n\\nIf you have any further questions or concerns, please feel free to let us know.\"}", "{\"title\": \"Many thanks for actively engaging in the rebuttal process\", \"comment\": \"Your clarifications and elaboration of the workflow will make the writeup much better, and the results potentially reproducible by readers. 
Based on my understanding of the creativity of harnessing LLMs for such data augmentation, I would like to hold the overall rating at the same level. All the best!\"}", "{\"title\": \"Key Improvements in the Rebuttal (The deadline is approaching, and we are eagerly looking forward to your response.)\", \"comment\": \"**Key Improvements in the Rebuttal**\\n\\n**(The deadline is approaching, and we are eagerly looking forward to your response.)**\\n\\nBased on the valuable suggestions and questions from the three reviewers (coVk and dEUx supported our work), we have conducted additional experiments and included them in the revised Appendix and main text. Below is a summary of the key changes:\\n\\n**(1) Out-of-Domain Reasoning Ability Test**\\n\\nIn **Section 3.6** and **Appendix C**, we tested the out-of-domain reasoning ability of the GSDP model. We selected multiple test sets from various fields (such as ARC-C, MMLU-STEM, GPQA, BBH, TheoremQA, and MBPP) to evaluate the model's reasoning capability in physics, chemistry, biology, and computer science (we call them scientific reasoning). Although the model was trained only on GSDP-MATH, it showed an average improvement of over **5%** (with a maximum improvement of **10.8%**) in scientific reasoning ability tests, indicating that GSDP-MATH not only significantly enhances the model's mathematical reasoning ability but also boosts its out-of-domain reasoning skills.\\n\\n**(2) Quantitative Analysis of the Dataset**\\n\\nIn **Appendix E**, we conducted a quantitative analysis of the dataset's \\\"seed similarity\\\" and \\\"data diversity.\\\" The results in Figure 5 and Table 6 of the revision show that GSDP-MATH has lower seed similarity and higher data diversity.\\n\\n**(3) Other Modifications**\\n\\nIn the revision, we added a case study to better illustrate the GSDP workflow and its advantages over other methods. 
Additionally, we clarified in the introduction the importance of KPRG and how the Graph-based method addresses the issues of limited scalability, high cost, and similarity to seed data in existing methods. (**Appendix D.2, D.3, Figure 3**)\\n\\n### **If the above clarifications have addressed your concerns, we hope you might consider supporting our paper. We would greatly appreciate your positive feedback on our work. Looking forward to your reply!**\"}", "{\"title\": \"Acknowledgement\", \"comment\": \"Finally, we would like to express our gratitude for your valuable suggestions. Based on your feedback, we have made revisions in the updated version (including the introduction and Figure 1), which are marked in red. Additionally, we have added supplementary experiments in the appendix, such as the testing of model generalization ability, case studies, KP quality assessment, and quantitative experiments on the dataset.\\n\\nIf you have any further questions or concerns, please feel free to let us know.\"}", "{\"title\": \"Out-of-Domain Reasoning Ability Test\", \"comment\": \"As shown in Table 1, although our model was trained using only GSDP-MATH, it can be observed that GSDP-Qwen-7B, GSDP-7B and GSDP-8B show average improvements of 5.1\\\\%, 6.5\\\\%, and 3.5\\\\% respectively in scientific reasoning tasks. And it is highly competitive on multiple benchmarks compared to other mathematical models. Please refer to Appendix C of the revision for a more detailed table.\\n\\n\\n*Table 1: Main results on scientific reasoning tasks. 
The \\u0394 rows highlight the improvements of the GSDP models over their corresponding baseline models.*\\n\\n| Model | ARC-C | MMLU-stem | GPQA | BBH | TheoremQA | MBPP | AVG |\\n|----------------------|-------|-----------|------|------|------------|--------------|------|\\n| Mistral-7B | 74.2 | 50.1 | 24.7 | 55.7 | 19.2 | 47.5 | 45.2 |\\n| LLaMA3-8B | 78.6 | 55.6 | 27.2 | _61.1_ | 20.1 | _54.9_ | 49.6 |\\n| Qwen1.5-7B | 75.6 | 45.5 | 26.7 | 45.2 | 14.2 | 52.1 | 43.2 |\\n| WizardMath-7B-V1.1 | 78.3 | 54.4 | 30.8 | 57.5 | 21.1 | 56.4 | 49.8 |\\n| MAmmoTH-7B-Mistral | 72.1 | 48.1 | 25.3 | 48.5 | **31.5** | 46.7 | 45.4 |\\n| MetaMath-Mistral-7B | 76.7 | 53.0 | 28.8 | 51.2 | 19.1 | 46.3 | 45.9 |\\n| MathScale-Mistral | 77.3 | 54.9 | **35.4** | 56.8 | 20.8 | 54.0 | 49.9 |\\n| GSDP-Qwen-7B | _79.2_ | 56.3 | 29.8 | 50.3 | 21.6 | 52.5 | 48.3 |\\n| `\\u0394 Qwen1.5-7B` | `+3.6` | `+10.8` | `+3.1` | `+5.1` | `+7.4` | `+0.4` | `+5.1` |\\n| GSDP-7B | 78.8 | _58.3_ | _32.3_ | 60.3 | _25.6_ | 54.8 | _51.7_ |\\n| `\\u0394 Mistral-7B` | `+4.6` | `+8.2` | `+7.6` | `+4.6` | `+6.4` | `+7.3` | `+6.5` |\\n| GSDP-8B | **80.5** | **60.8** | 30.8 | **63.7** | 24.2 | **58.4** | **53.1** |\\n| `\\u0394 LLaMA3-8B` | `+1.9` | `+5.2` | `+3.6` | `+2.6` | `+4.1` | `+3.5` | `+3.5` |\"}", "{\"title\": \"Thank you for raising the score and for your support.\", \"comment\": \"Thank you very much for your positive feedback and for taking the time to review our paper. We greatly appreciate your valuable comments and suggestions that have helped to improve our work. We are pleased to hear that our responses have satisfactorily addressed your concerns.\\n\\nThank you for raising the score and for your support.\"}", "{\"title\": \"Table Update\", \"comment\": \"To better showcase the advantages of the GSDP-Model, we have updated the table. 
Please refer to Appendix C of the revision for a more detailed table.\\n\\n---\\nAs shown in Table 1, although our model was trained using only GSDP-MATH, it can be observed that GSDP-Qwen-7B, GSDP-7B and GSDP-8B show average improvements of 5.1\\%, 6.5\\%, and 3.5\\% respectively in scientific reasoning tasks. And it is highly competitive on multiple benchmarks compared to other mathematical models.\\n\\n\\n*Table 1: Main results on scientific reasoning tasks. The \\u0394 rows highlight the improvements of the GSDP models over their corresponding baseline models.*\\n\\n| Model | ARC-C | MMLU-stem | GPQA | BBH | TheoremQA | MBPP | AVG |\\n|----------------------|-------|-----------|------|------|------------|--------------|------|\\n| Mistral-7B | 74.2 | 50.1 | 24.7 | 55.7 | 19.2 | 47.5 | 45.2 |\\n| LLaMA3-8B | 78.6 | 55.6 | 27.2 | _61.1_ | 20.1 | _54.9_ | 49.6 |\\n| Qwen1.5-7B | 75.6 | 45.5 | 26.7 | 45.2 | 14.2 | 52.1 | 43.2 |\\n| WizardMath-7B-V1.1 | 78.3 | 54.4 | 30.8 | 57.5 | 21.1 | 56.4 | 49.8 |\\n| MAmmoTH-7B-Mistral | 72.1 | 48.1 | 25.3 | 48.5 | **31.5** | 46.7 | 45.4 |\\n| MetaMath-Mistral-7B | 76.7 | 53.0 | 28.8 | 51.2 | 19.1 | 46.3 | 45.9 |\\n| MathScale-Mistral | 77.3 | 54.9 | **35.4** | 56.8 | 20.8 | 54.0 | 49.9 |\\n| GSDP-Qwen-7B | _79.2_ | 56.3 | 29.8 | 50.3 | 21.6 | 52.5 | 48.3 |\\n| `\\u0394 Qwen1.5-7B` | `+3.6` | `+10.8` | `+3.1` | `+5.1` | `+7.4` | `+0.4` | `+5.1` |\\n| GSDP-7B | 78.8 | _58.3_ | _32.3_ | 60.3 | _25.6_ | 54.8 | _51.7_ |\\n| `\\u0394 Mistral-7B` | `+4.6` | `+8.2` | `+7.6` | `+4.6` | `+6.4` | `+7.3` | `+6.5` |\\n| GSDP-8B | **80.5** | **60.8** | 30.8 | **63.7** | 24.2 | **58.4** | **53.1** |\\n| `\\u0394 LLaMA3-8B` | `+1.9` | `+5.2` | `+3.6` | `+2.6` | `+4.1` | `+3.5` | `+3.5` |\"}" ] }
CD2wgg9RQD
InfCycle: Learning to Use Tools via Inference Compute and Cycle Consistency
[ "xiaobo liang", "wenjing Xie", "Juntao Li", "Wanfu Wang", "Yibin Chen", "Min Zhang" ]
The scaling of inference-time computation in large language models (LLMs) has emerged as a promising approach for enhancing reasoning capabilities by trading off inference-time and pre-training compute. The practice of how to enable LLMs to utilize additional computation at test time to improve response accuracy is crucial for both academia and industry. \textit{Proposer-Verifier}, as a typical paradigm of inference scaling, often fails to generalize to various scenarios. Specifically, in tool use tasks, LLMs face the risk of lacking effective verifiers, leading to error accumulation in multiple reasoning steps. In this work, we address these challenges by introducing \textbf{InfCycle}, a multi-stage data synthesis strategy that employs LLMs as data synthesis and employs cycle consistency verification to ensure high-quality trajectory generation. This approach utilizes step-wise cycle consistency among synthesized trajectories for a given tool, providing effective process supervision that has advantages over outcome supervision. Extensive experiments on multiple tool-use and reasoning tasks demonstrate that InfCycle efficiently enables self-improvement. It outperforms state-of-the-art baselines on StableToolBench, achieving a 75.4\% pass rate and a 79.6\% win rate using small size models (7B), without relying on external supervision or expert trajectories for warm-up.
[ "LLM", "Tool use", "Inference Scaling", "Cycle Consistency", "Self-improve" ]
https://openreview.net/pdf?id=CD2wgg9RQD
https://openreview.net/forum?id=CD2wgg9RQD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sJ92kjrGTQ", "owLwQFb8hm", "K4wKgln30H", "D2MNgXlNGN", "7omoKDgfCA" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730717604865, 1730710104048, 1730694256589, 1732712955011, 1730700658135 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9930/Reviewer_W1Qa" ], [ "ICLR.cc/2025/Conference/Submission9930/Reviewer_Ew6c" ], [ "ICLR.cc/2025/Conference/Submission9930/Reviewer_1Fb2" ], [ "ICLR.cc/2025/Conference/Submission9930/Authors" ], [ "ICLR.cc/2025/Conference/Submission9930/Reviewer_qee4" ] ], "structured_content_str": [ "{\"summary\": \"This paper addresses the challenge of enhancing the reasoning abilities of large language models (LLMs) in integrating inference-time external tool invocations. Using a Proposer-Verifier approach, where LLMs receive automatic or human feedback on either the outcome or the sequential generation process, the paper introduces an iterative framework called InfCycle. InfCycle draws inspiration from the concept of cycle consistency in transformation tasks, where consistency implies that composing a transformation with its inverse keeps the output close to the identity map. Experimental results demonstrate that the proposed approach outperforms state-of-the-art baselines without relying on external supervision.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper tackles a timely and relevant problem of improving LLM reasoning capabilities. The InfCycle framework is conceptually sound, and its key finding\\u2014that process verifiers are more effective than outcome verifiers\\u2014aligns with similar results in the literature. The experimental benchmarks (StableToolBranch and BFCL) and baselines are well-chosen and relevant to the problem. 
The results of the proposed framework, presented in Tables 2 and 3, are compelling across this comprehensive experimental setup.\", \"weaknesses\": \"The theoretical and conceptual novelty of the proposed framework is not sufficiently demonstrated in the paper. The main idea behind InfCycle and its adaptation of cycle consistency is not clear. In particular, the analogy to cycle consistency in this context is ambiguous, as the feedback from the simulator to the generator differs in type from the generator's input. Moreover, the lack of an overview example of the approach makes it difficult to understand specific contributions, such as the use of the A* algorithm and direct preference optimization.\", \"questions\": \"1. Could you clarify the concept behind InfCycle and its relation to cycle consistency with an example?\\n2. Could you also provide details on the application of Preference Learning and the A* algorithm in the paper using a simple example?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces InfCycle, a multi-stage data synthesis strategy for LLMs to improve reasoning and tool use through self-improvement without external supervision. It builds on the paradigm of proposer-verifier for inference scaling. InfCycle utilizes cycle consistency for verifying intermediate steps, reducing error accumulation and enhancing data sampling efficiency. It incorporates a Generator and Simulator for generating training data and applies the A* search algorithm and preference learning to boost performance. 
Experiments on benchmarks demonstrate significant improvements, with Qwen2.5-7B outperforming GPT-4, achieving a ~75% pass rate and a ~79% win rate.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"leveraging additional computation at test time to enhance tool use accuracy, enabling LLMs to improve without additional supervision\", \"proposing a multi-stage approach using a Proposer and cycle consistency as a Verifier to boost data sampling and performance\", \"showing significant performance gains, including Qwen2.5-7B surpassing GPT-4 on StableToolbench, with a 75.4% pass rate and a 79.6% win rate, and on Berkeley Function Calling Benchmark where MetaLLaMA3-8B achieves an improvement of over 16.31 points.\", \"analysis results show that process verifiers are more effective than solely outcome verifiers in enhancing long-range reasoning capabilities and in identifying high-quality execution trajectories.\", \"multistep tool invocation is considered comprising planning, tool selection, execution, tool reflection and task reflection.\"], \"weaknesses\": \"The experimental results are rather mixed. Why are the two benchmarks treated asymmetrically with respect to baseline and comparison models? Why not include results from GPT4-turbo and GPT4o-mini for both cases? Similarly, InfCycle uses 3 different models in the first case (Tables 2 and 3) but only 2 different models in the second (Table 4).\", \"questions\": \"Could the authors explain the concerns about experimental evaluation in the weakness section?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents InfCycle, a framework aimed at improving Large Language Models' (LLMs) tool-usage capabilities for complex reasoning tasks. 
InfCycle leverages inference-time compute scaling, which boosts solution quality by increasing sampling during inference, and introduces cycle consistency as a process verification mechanism to ensure accurate data synthesis for model training. Key challenges addressed by InfCycle include the limitations of traditional methods like Proposer-Verifier, which often lack reliable verification for tool-based, multi-step reasoning tasks, and the difficulty in obtaining high-quality, large-scale training data in tool-use scenarios. InfCycle tackles these issues through three stages: (1) a Data Synthesizer Pipeline that collects real-world APIs, categorizes them, and generates user queries validated through LLM-based semantic checks; (2) Step-wise Cycle Consistency, which verifies the consistency of execution trajectories generated by the data synthesis steps to ensure logical coherence and semantic accuracy; and (3) a Multi-Stage Synthesis Strategy that uses A* search for efficient sampling, Direct Preference Optimization (DPO) to improve solution quality via pairwise comparisons, and iterative refinement to handle increasingly complex scenarios. 
Experiment results appear to show that cycle consistency as a verification mechanism overcomes the limitations of some existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"InfCycle allows LLMs to learn tool use independently, eliminating the need for external supervision or expert demonstrations.\", \"The cycle consistency mechanism provides a reliable verification process, overcoming traditional Proposer-Verifier limitations for tool use tasks.\", \"Through a multi-stage synthesis strategy with A* search and preference learning, InfCycle improves the model\\u2019s reasoning skills, particularly in executing multi-step tool actions and mitigating error accumulation.\", \"Evaluations on StableToolBench and the Berkeley Function-Calling benchmark show that InfCycle outperforms state-of-the-art baselines, achieving high pass and win rates\\u2014even with smaller model sizes.\"], \"weaknesses\": [\"Although the authors highlight the efficiency of the data synthesis pipeline, InfCycle\\u2019s iterative multi-stage approach\\u2014combining synthesis, A* search, and preference learning\\u2014could still lead to substantial computational demands. The paper doesn\\u2019t analyze the computational costs associated with data generation in depth, which is crucial for assessing feasibility, especially in resource-constrained settings.\", \"The data synthesis process may also fall short in simulating the complexity of real-world tool use, where unpredictable errors or changing user needs often arise.\", \"Pass Rate and Win Rate are the main metrics used for evaluating model performance. 
However, these metrics may not fully reflect the quality of tool use, such as efficiency measured by the number of APIs called.\", \"The paper\\u2019s novelty is somewhat limited, as its multi-stage synthesis strategy appears similar to Chain of Preference Optimization (CPO) [1] , which fine-tunes LLMs to align each step of the chain of thoughts reasoning paths with those of tree of thoughts using the inherent preference information in the tree-search process. Furthermore, the step-wise cycle consistency mechanism overlaps with Step-DPO [2], which also treats each reasoning step as a unit for preference optimization.\", \"[1] Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. NeurIPS'24.\", \"[2] Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs.\"], \"questions\": [\"Could the authors clarify how InfCycle\\u2019s approach specifically differs from Chain of Preference Optimization (CPO) and Step-DPO?\", \"Could the authors provide insights into the efficiency of the synthesized solutions? For instance, metrics on API call frequency or other indicators of solution efficiency would help to assess practical performance, complementing the Pass Rate and Win Rate metrics.\", \"Could the authors provide an overview of the computational cost involved in InfCycle\\u2019s multi-stage synthesis strategy?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper considers the problem of data synthesis for tool usage in LLMs. The key motivation is that by leveraging tools, e.g., API calls, and other inference-time computation, LLMs are able to perform test time adaptation. 
The key idea of this paper is to have the LLM generate its own API queries and then evaluate if those API calls were successful and semantically correct. This is used in two ways. First, in order to guide test time inference, e.g., via a tree-of-thought-like search, as well as to act as preference data for DPO.\\n\\nThe results indicate that this approach is able to improve several models across a variety of tasks.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The problem of self-learning and in particular usage of tools for problem solving is relevant and of interest.\\n2. The experimental evaluation is thorough and does seem to mostly back the claims presented in the paper.\\n3. The key idea of generating synthetic data via self-generated and evaluated API calls is clear and applicable to a large number of domains.\", \"weaknesses\": \"1. The biggest weakness of this paper is presentation. There are many ideas and they don't seem to flow well between each other. As a concrete set of examples:\\n - Perhaps I'm missing something, but section 3 seems to be completely unrelated to the rest of the paper. The insight that process verifiers are better than correctness verifiers never seems to be built on. \\n - I had no idea that part of the proposal involved fine-tuning the LLM using DPO until page 6. In retrospect, reading the abstract and introduction, I could piece this together, but it does not come across at all during the first reading.\\n\\n2. While the idea is effective, to me step-wise cycle consistency just seems like another instantiation of the proposer/verifier split? Going back to presentation, cycle consistency is never formally defined, and so perhaps I'm misunderstanding, but this just seems to be checking that the individual steps are correct?\", \"questions\": \"What *is* (in unambiguous formal language) cycle consistency? 
Please provide a definition!\\n\\nI am open to raising my score, but at the moment, the understanding I've implicitly gleaned from the paper looks incremental.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
CCoa6XgO8F
A Defense of One-Step Learning: Examining Single-Batch Distillations
[ "Connor Wilhelm", "Dan Ventura" ]
Dataset distillation produces a compressed synthetic dataset that approximates a large dataset or other learning task. A model can be trained on a distillation in a single gradient descent step. Conventional wisdom suggests that single-step learning is not generalizable and should yield poor performance; yet, distillation defies these expectations with good approximations of full direct-task training for a large distribution of models. In order to understand how distilled datasets can perform one-shot learning, we examine the distilled data instances and the cost surfaces produced by the distilled datasets. We demonstrate that the distilled dataset not only mimics features of the true dataset but also produces cost surfaces such that one-step training leads models from the initialization space into local minima of the true task's cost surface. This shows how one-step learning's counter-intuitive success is not only reasonable but also the expected outcome of dataset distillation.
[ "distillation", "interpretability", "explainability", "compression", "cost surface", "loss landscape" ]
Reject
https://openreview.net/pdf?id=CCoa6XgO8F
https://openreview.net/forum?id=CCoa6XgO8F
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wfKDxoraoO", "ucss8VPzXj", "uTwQQAQaJ3", "uE86ktxb5I", "tiX9Xe1VkX", "oBxPz0alMI", "n9IBZ6U0pF", "kfzdwlmdGe", "kcWeU1nUUq", "kGpRBZsMpJ", "hhxDFsRaQE", "c0tlqoepoS", "ZzgaQhJZRi", "YsFPkeqreT", "PfbLcINtiC", "JjBh5vFzjw", "CmjWuSHp1H", "ASvZrQKtB0", "8IftNcWNk8", "77Tr5QfURD", "6XpJ4ZWmqU" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "decision" ], "note_created": [ 1730650094635, 1732751541935, 1732750523982, 1732630741883, 1732618213055, 1733226567892, 1730640374920, 1730753601357, 1733158894271, 1733173163825, 1733262831787, 1733172906565, 1733172484364, 1733226135425, 1732647441646, 1733172262392, 1730598154946, 1730384166247, 1733172024355, 1734731554126, 1737523847555 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_N5Yq" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_CKhM" ], [ "ICLR.cc/2025/Conference/Submission7563/Authors" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_5y8o" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_N5Yq" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_5y8o" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_XZV1" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_CKhM" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_5y8o" ], [ "ICLR.cc/2025/Conference/Submission7563/Authors" ], [ "ICLR.cc/2025/Conference/Submission7563/Authors" ], [ "ICLR.cc/2025/Conference/Submission7563/Authors" ], [ "ICLR.cc/2025/Conference/Submission7563/Authors" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_5y8o" ], [ "ICLR.cc/2025/Conference/Submission7563/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7563/Authors" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_7JzF" ], [ "ICLR.cc/2025/Conference/Submission7563/Reviewer_5y8o" ], [ "ICLR.cc/2025/Conference/Submission7563/Authors" ], [ "ICLR.cc/2025/Conference/Submission7563/Area_Chair_v73d" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This article explores the effectiveness of one-step learning by analyzing distilled data instances and cost surfaces. The experiments demonstrate that the distilled dataset not only replicates the characteristics of the real dataset but also generates suitable cost surfaces, enabling one-step training to guide the model from the initialization space to a local minimum of the actual task cost surface.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The experiments encompass a variety of benchmark tasks, including image recognition and reinforcement learning.\\n2. The article presents a task-agnostic distillation algorithm (Algorithm 1) and thoroughly details the various steps, parameter settings, and selection of loss functions relevant to different tasks in the distillation process. This clarity aids readers in understanding and replicating the experiments.\", \"weaknesses\": \"1. **Theoretical Depth**: While the effectiveness of the distilled dataset is demonstrated experimentally, the theoretical framework explaining why an appropriate cost surface emerges during the distillation process is somewhat lacking. The conclusions largely rely on empirical observations. The article primarily documents experimental details and phenomena, missing an in-depth analysis that could better inform the design and application of the methods discussed.\\n2. **Generality of Experimental Validation**: The authors have chosen algorithms and datasets that are too simplistic and limited. 
The method selected by the authors targets distillation with single-step training, as they mentioned, single-step learning is the expected outcome for distilling these types of datasets, but what about distillation methods for other categories? There are many new related meta-learning algorithms, as well as many methods outside of meta-learning (e.g., trajectory matching).\", \"questions\": \"1. The observations of the cost surface and loss curvature have also been observed, analyzed, and used as a basis for proposing improvement methods in the work of others[1]. They conducted a more detailed analysis of loss curvature. Have you combined your analysis with theirs?\\n2. The current experiments are too simplistic. To my knowledge, there are several advanced algorithms for dataset distillation that are not very computationally expensive, and these experiments are entirely affordable. Can the authors conduct experiments on more recent works and larger datasets?\\n\\n[1] Shin, Seungjae, et al. \\\"Loss-curvature matching for dataset selection and condensation.\\u201d\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Final review\", \"comment\": \"I'll maintain my rating.\"}", "{\"title\": \"Paper Revision Changes\", \"comment\": [\"We have implemented the following changes to the manuscript:\", \"Changed Figure 1 notation\", \"Corrected Figure 2 labels\", \"Replaced Figure 3 labels w/ text labels\", \"Minor formatting issues fixed: header corrected, minor edits to text\"]}", "{\"comment\": \"As it stands my rating of the paper will stay the same as the questions and weaknesses have not yet been addressed.\"}", "{\"comment\": \"I continue to uphold my evaluation, as the authors remained uninvolved in the rebuttal phase, leaving my concerns unresolved.\"}", "{\"comment\": \"## Questions\\n\\n1. 
It would be essential to show how your method compares against using a single batch of training data instead of the full set, given that one claim is that it allows for quicker production of the loss landscapes. I am also unsure what you mean by `minimum matching` The minima are not equivalent. How are they matched? When doing loss landscapes, point zero, zero is the current model's loss; it then explores the landscape from that point by perturbing the weights in two randomly orthogonal directions to give an overview of the general landscape. Therefore the model's current minima/position will be at zero, zero. How can one say they match? \\n\\n2. Great, this will significantly improve the paper.\\n\\n3. In future work, it is recommended to use the language of the literature and stay consistent with it. Otherwise, it requires the reader to hold both equivalent terms in their head instead of one. Sticking with loss landscapes is recommended, as this is more frequently used in literature. \\n## Overall \\nThank you for taking the time to respond. I, however, will not change my score. The paper is qualitative, and the minima matching terminology must be better explained. The responses and paper changes have yet to address my core concerns surrounding using metrics, additional architectures, figure scale matching, etc.\"}", "{\"summary\": \"This work explores how dataset distillation enables models to achieve effective one-shot learning from reinforcement learning perspective. Despite conventional belief that single-step learning would not generalize well and would perform poorly, distilled datasets allow models to closely approximate the results of direct-task training across a wide range of model architectures. 
The authors analyze both the distilled data instances and the cost surfaces they generate, finding that the distilled datasets not only replicate features of the original data but also shape cost surfaces that guide models from their initial states into local minima of the true task\\u2019s cost surface.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides an in-depth examination of cost surfaces generated by distilled datasets, offering a valuable perspective on how dataset distillation guides models to local minima. By analyzing the distilled instances, the authors demonstrate that the compressed data retains critical task-relevant features, which helps improve interpretability, especially for simpler tasks.\\n\\n2. To the best of the knowledge, this is the first paper that investigates the dataset distillation in reinforcement learning scenario.\", \"weaknesses\": [\"While the cost surfaces generated by distillations show promising results, the paper does not fully address potential scalability issues when applied to larger models or datasets, which could present computational challenges. I suggest the authors provide more theoretical verification for the claim.\", \"The study relies on one method of distillation, but an evaluation of alternative distillation methods could provide a broader understanding of how different techniques impact cost surfaces and model performance. Methods like DATM, SDC, IDC should also be considered as baselines for further exploration.\", \"[1] Guo Z, Wang K, Cazenavette G, et al. Towards lossless dataset distillation via difficulty-aligned trajectory matching[J]. arXiv preprint arXiv:2310.05773, 2023.\", \"[2] Wang S, Yang Y, Wang Q, et al. Not all samples should be utilized equally: Towards understanding and improving dataset distillation[J]. arXiv preprint arXiv:2408.12483, 2024.\", \"[3] Kim J H, Kim J, Oh S J, et al. 
Dataset condensation via efficient synthetic-data parameterization[C]//International Conference on Machine Learning. PMLR, 2022: 11102-11118.\", \"More complex datasets and benchmarks should also be considered. The experiments mainly focus on relatively simple datasets, such as MNIST and CIFAR-10, as well as cart-pole and Centipede environments. The paper could be strengthened by evaluating the method on more complex tasks, such as high-resolution image datasets or NLP tasks, to assess its scalability.\", \"This paper does not use the correct ICLR template. Specifically, there is no ''Under review as a conference paper at ICLR 2025'' note in the paper.\", \"Minor: some notations are wrong. For example, in figure 1, the authors write T0, T_0, and also TO, which generate mistakes. I suggest the authors polish the notations and representation. In line 161, there is a unexpected + symbol.\"], \"questions\": \"1. How does the proposed approach handle high-dimensional data, such as images with complex structures or datasets with numerous features? It would be beneficial to understand how the technique performs in such scenarios.\\n2. Given the success in various supervised and reinforcement learning tasks, could this method be extended to other domains, like natural language processing or time-series analysis?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a study that inspects the loss surfaces of distilled datasets. The authors show the results on various tasks, including Atari Centipede and standard image benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is easy to follow and understand\", \"The authors performed relatively though coverage on different domains.\"], \"weaknesses\": \"- I'm not sure what the paper is contributing. 
The distilled datasets can train a model that performs similarly, the results itself hint the loss surface probably shares similar local minima.\\n\\n- A lot of related works are missing. [1] analyzes the DD task in depth. [2] made multi-step BPTT work in DD. One easy question to ask and study further is, one-step BPTT (used in this paper) underperforms multi-step BPTT[2], what has been changed in the loss landscape when number of steps is increased?\\n\\n[1] What is Dataset Distillation Learning? icml'24\\n[2] Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks. neurips'22\", \"questions\": \"See above. I'm not sure what this paper is contributing and the current version seems to be not sufficient. Giving deeper understanding of DD is interesting and the authors should consider further complete the work to submit to another future venue.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the changes to the PDF. However they have not address my core issues with the paper have not been sufficiently address. I will maintain my score.\"}", "{\"comment\": \"Thank you for your review. We apologize for our late response. We will examine the works you've provided and add them to the related works and examine where we might utilize their methods to strengthen our papers.\\n\\nAs far as the contribution, we believe the results we have achieved with one-step learning distillations and minimum-sized distillations are counterintuitive. Thus, our work attempts to find out how a model is capable of learning in one gradient descent step. We believe that prior works in distillation, which focus on interpreting distilled dataset instances, are missing the full picture. 
We assert that the full picture can only be seen by examining distillation as a learning task through the cost surface, rather than as disconnected instances in a dataset.\"}", "{\"comment\": \"We understand you keeping your original score. Thank you for your responses; we are grateful for your feedback which will help us strengthen our research and our paper.\"}", "{\"comment\": \"Thank you for your review. We apologize for our late responses. Your concerns cannot be fully addressed in the short time allotted for the discussion period. As you suggested, we will prioritize testing with other distillation methods. We will also focus on providing stronger theoretical backing for our work. We were unaware of the work you cited; thank you for bringing it to our attention. We will examine it and combine our analysis with theirs.\"}", "{\"comment\": [\"Thank you for your review. We apologize for our late response, yet we hope we can respond to a few of your concerns:\", \"W1: While we do not prove that distillation itself is scalable, that is well beyond the scope of this paper. The goal of the paper was to understand one-step learning, not necessarily prove one-step learning's usefulness. Our visualization method is scalable to distillations, as distillations are significantly smaller than the original tasks. We do admit that visualizing large datasets is computationally expensive using the method we applied, making the comparisons difficult. 
Yet, if our assertions about how the distilled surface mimics minima placement of the original surface, this would provide a clear use-case for distillation - examining complex cost surfaces with little computational cost.\", \"W2/3: We agree that testing our method on more distillation methods and tasks would strengthen our results.\", \"W4: Fixed\", \"W5: Fixed\", \"Q1: The most complex task we tested on was Atari Centipede.\", \"Q2: In theory, it could expand to any domain so long as there is a differentiable loss function.\"]}", "{\"comment\": \"## Clarity 1\\nConventional deep learning wisdom is a subjective term; here, it would depend on the literature known to the reviewer. To be more exact the text could be improved to be `it is a generally held practical belief that single-step learning is not generalisable and should yield poor performance`. This still considers the citation provided in the original review that shows full batch training can result in good-quality models. Still, it is not a practically viable option. At least to me, the result does not seem counter-intuitive; if the corset is a well-compressed form of the dataset, it would not surprise me that it could achieve comparable accuracy to the full dataset as a good compression would maintain a strong representation of the dataset with the fewest examples and thus full batch single step training could occur. But I can see how this would not be expected. \\n\\n## Clarity 2 \\n\\nThank you for explaining. \\n\\n## Qualitative Results\\n\\nThe argument of `close in parameter space` could potentially be measured by controlling the size of the distilled dataset and comparing the distance between the model's layers in weight space as they are trained on the larger distilled datasets. To provide an understanding of how the size of the distilled dataset affects this closeness. One would expect that as more data is provided, the models are closer to the model trained on the full dataset in weight space. 
This would significantly strengthen your argument; however, as it stands, this is missing and thus does not hold up. I also argue what it means to be `close in parameter space` in such a high-dimensional space; many metrics would be required to make such a claim. For a visual inspection, providing a radial slice of the loss landscape, as done in [2], may aid this investigation, but metrics would be required. \\n\\nIn addition, the loss landscapes are not that complex, which is probably due to the model's simplicity; exploring CIFAR10 with a VGG that demonstrates complex loss landscapes [4] could help further the perspective being claimed. Because it is hard to disentangle between the loss landscape being essentially bowl-like and the ease of the task, more complexity in the surface and your method showing similar troughs and peaks would add more validation behind the idea that it matches the loss surface. \\n\\nAlthough this cannot done within the time frame left, to strengthen this idea and add more support, it would be interesting to see if the two models can be connected and what the distance of connection is [1,3], are the minima in close to one another? Although an existing definition of close would be required to form the baseline.\\n\\n[1] Draxler, F., Veschgini, K., Salmhofer, M. and Hamprecht, F., 2018, July. Essentially no barriers in neural network energy landscape. In International conference on machine learning (pp. 1309-1318). PMLR.\\n[2] Fort, S., Hu, H. and Lakshminarayanan, B., 2019. Deep ensembles: A loss landscape perspective. arXiv preprint arXiv:1912.02757.\\n[3] Garipov, T., Izmailov, P., Podoprikhin, D., Vetrov, D. P., and Wilson, A. G. Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs. ArXiv e-prints, February 2018. URL http://arxiv.org/abs/1802.10026.\\n[4] Li, H., Xu, Z., Taylor, G., Studer, C. and Goldstein, T., 2018. Visualising the loss landscape of neural nets. 
Advances in neural information processing systems, 31.\\n\\n## Mismatch between definition and result:\\n\\nThis is understood, however needs to be made clear in the paper. I do agree that adding additional experiments and evaluating with different data amounts would help strengthen the arguments. \\n\\n## Figures \\n\\nThank you for fixing the figures,\", \"as_to_fig_4\": \"The caption does provide this context; however, it is not easy to process. As this is a big part of the story, it is essential to show which figure is more explicit. Making an explicit table with a more apparent heading would improve this.\\n\\nAs to Fig 8b, I understand that the distillation case can result in a model not landing on a minima. Still, I am not sure what you mean by `as the cost surface simply needs to map initialisations to minima of the original task, not necessarily minima of the distilled task.` what do is meant by map here? This terminology is used without being fully explained. \\n\\nAs to `CIFAR10 surfaces are shown in Figure 4b.` Sorry for the confusion; I meant the dataset examples, not the loss landscapes. \\n\\nAs to `This is not an extraordinary claim...` Sorry, this was later realised with the text; I would add it as shown in this paper to clarify that it is based on your results and has not been shown before.\"}", "{\"comment\": \"We thank the reviewers for their feedback and apologize for our late response. Due to time and resource limitations, we have set our priorities elsewhere, but with the extended discussion period, we will endeavor to respond to the questions and concerns of the reviewers as best we can.\\n\\nSince the first deadline is the paper PDF revision, we will focus on the revision before responding to individual reviewers. Note that some of the reviewers' concerns require significant time and resources to implement. 
We will attempt to rectify all the concerns that we can in this limited time, but we are likely not able to implement all the feedback at the moment. We will take this feedback into account for future revisions of the paper as needed, and we are grateful for the reviewers for their efforts in helping us improve our research.\"}", "{\"comment\": [\"Thank you for your review. We apologize for our late rebuttal, but we hope that we can clarify a few points:\", \"W1: We agree that testing on other distillation methods would strengthen our results.\", \"W2: This is a valuable point. We will apply your proposed method to our experiments and compare to our current results.\", \"Q1: We have not compared the two approaches directly. In our reinforcement learning experiments, the distillation clearly learns fine on later trajectories. If this were not the case, the model would be learning on data gathered only with poor RL policies and would not be able to reach the high performance seen in the RL experiments.\", \"Q2: We used the scaling of the magnitude of the trained network layers for all plots except where clearly stated.\"]}", "{\"summary\": \"This paper studies the performance of dataset distillation under the regime of one gradient update under the meta-model matching framework. The study shows that a single step gradient update distilled data can achieve decent task performance, and the distilled dataset having ideal properties with respect to the real data with similar data features and similar loss landscapes towards a specific local minima.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Since dataset distillation as a field right now is mostly empirical, the inner-workings on why it works is understudied. This work studies the loss landscape of the distilled data, which can provide important insights for better design of dataset distillation algorithms.\\n2. 
The study on RL datasets cart-pole and centipede is interesting since most of dataset distillation works focuses on image recognition.\\n3. The paper demonstrate that distilling with one gradient step can achieve decent task performance on CIFAR-10 with accuracy of 28% using fewer than 1 image per class through soft-labeling.\", \"weaknesses\": \"1. It is unclear whether findings in this paper will translate to other distillation algorithms, and therefore, how it will fit in the existing body of research. While the paper demonstrates the ability to achieve decent performance with one-step gradient update, the performance is very subpar compared to other distillation algorithms such as BPTT [1], which achieves 49% with 10 examples, or Trajectory Matching [2], which achieves 46% with 10 examples.\\n2. The method useful to justify why a solution is a local minima is not sound. Visualization through random vectors projection was originally designed to capture the non-convexity of the loss landscape and is insufficient for the understanding the optimization trajectory (section 7.1 of [3]). To better understand the optimization trajectory, visualization with PCA direction proposed in section 7.2 of [3] can be used. To quantitatively justify local minimum, one would have to reason about the sharpness (second derivative/Hessian) of the loss landscape [4]. \\n\\n[1] Deng, Zhiwei, and Olga Russakovsky. \\\"Remember the past: Distilling datasets into addressable memories for neural networks.\\\" Advances in Neural Information Processing Systems 35 (2022): 34391-34404.\\n\\n[2] Cazenavette, George, et al. \\\"Dataset distillation by matching training trajectories.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[3] Li, Hao, et al. \\\"Visualizing the loss landscape of neural nets.\\\" Advances in neural information processing systems 31 (2018).\\n\\n[4] Yao, Zhewei, et al. 
\\\"Pyhessian: Neural networks through the lens of the hessian.\\\" 2020 IEEE international conference on big data (Big data). IEEE, 2020.\", \"questions\": \"1. Curious towards whether one-step learning generate datasets with different properties compared to multi-step learning approaches such as BPTT. Existing work shown that data distilled with popular algorithm seems to capture early trajectories rather than specific local minimum [1]. Curious whether authors have any insights on whether single-step learning changes this property?\\n2. One potential issue with visualization through random projections is that every solution will look like a minima with sufficiently large step size. What does the loss landscape look like for CIFAR-10 with smaller step sizes?\\n\\n[1] Yang, William, et al. \\\"What is Dataset Distillation Learning?.\\\" Forty-first International Conference on Machine Learning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores and tries to understand why extreme dataset distillation, where the number of samples is less than the number of classes and synthetically generated, can be used to train a model and the model can achieve comparable accuracy to a model trained on the full dataset, in the setting where one step learning is used. This is explored in the MNIST, CIFAR10 supervised image classification task as a well as the cart-pole and Centipede reinforcement learning tasks. The paper suggests that the reason why extreme dataset distillation can perform well is that the models go to a local minima that is in the same space as the full datasets minima. To explore this they plot the loss landscapes of a model trained on the full dataset and the model trained on the distilled dataset, and then compare there loss landscapes made with the distilled and full dataset. 
Therefore the overlap between these landscapes, explains the success of extreme dataset distillation and why it is to be expected.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Introduction is very clear, engaging and the problem is very well motivated, pleasure to read.\\n\\n2. Contributions are layout clearly.\\n\\n3. Two different modalities explored Vision and Reinforcement Learning.\\n\\n4. The idea to explore why single batch distillations works is very interesting.\", \"weaknesses\": \"**Clarity**\", \"in_the_abstract_line_13_14\": \"\\\"Conventional wisdom suggests that single-step learning is not generalisable and should yield poor performance\\\" I don't think this to be the case, as this paper [1] shows that stochastic training is not required for generalisation but if one is to use single-step learning, i.e. full-batch gradient decent, with a lot of explicit regularisation can achieve comparable accuracy to using SGD on CIFAR10.\\n\\nIn Section 2.2: DISTILLATIONS USED IN EXPERIMENTS, only the average result of training 1000 models is reported. Could you also report the standard deviation of these models? Also, why did you select 1000 instead of any other number? Why are the conventional trained models only trained once with the performance reported instead of averaged across 5 models? Why is train accuracy not reported?\\n\\n**Qualitative Results:**\\n\\nThe comparison of the loss landscapes is poor as no metric is provided; even though the colours are the same, the values are not, making it hard to compare. To compare visually, ensure the images all use the exact colour mapping. With this, the idea of minima-matching needs to be adequately explained, other than stating the model achieves a low loss at/near the centre; however, these loss values are massively different. 
How are the loss landscapes the same or approximately matched?\\n\\n**Mismatch between definition and result:**\", \"line_167_169\": \"\\\"While the CIFAR-10 distillation did not converge well, perhaps due to the model's risk of overfitting, the results demonstrate that even poor distillations function similarly to well-converged ones.\\\" There needs to be more evidence to support this claim; the MNIST case performs 14.1% worse than the entire dataset, suggesting that dataset distillation in this case will lead to poor-performing models regardless. I would go as far to say that this goes against the statement in the introduction line 036: \\\"The distillation-trained model should perform comparably to the model trained on the original task.\\\" a 14.1% and 35.7% difference in the test accuracy is far from comparable accuracy. This is also the case for the reinforcement learning task with a large difference between Centipede of 1084 and 2D cart-pole: 134.1\\n\\n**Figures**\", \"figure_2\": \"1-D cart pole. The directions are wrong on the second image; both images say 3.1% left and 96.9% right, irrespective of the direction of the 1-D cart pole. From the caption (Line 236:237: \\\"In 1D cart-pole, the state and labels clearly show that the agent should move in the direction in which the pole is leaning.\\\") it is my understanding that the second figure should have a bigger value on the left than on the right, as the pole is leaning left.\", \"figure_3a\": \"The softmax probability of the classes is hard to read- can the values be reported instead of the colour gradient, which is hard to read?\\n\\nFigure 4 needs to be explained clearly. It is difficult to tell which direction the trained (rows) and the datasets (columns). 
Could it be made more apparent?\\n\\nFigure 8b) It is hard to tell the difference between the initialisation and the distil-trained; it appears that the distil-trained is on a higher part of the loss landscape and that the initialised model is closer to the minima.\\n\\nWhy are the CIFAR10 Single Batch dataset images excluded from the paper? I would have liked to have seen them at least added to the appendix. \\nz\\n**Minor points:**\\n\\nLines 48-49 need a citation; this is a bold statement that I would like verified.\", \"line_161\": \"Starts with a \\\"+\\\"\", \"line_167\": \"Please state the training accuracy instead of \\\"(despite near-perfect training set accuracy)\\\"\\n\\n\\n**Overall:**\\n\\nThe analysis and experimental setup is lacking, it is not made clear how comparisons are made other than visual inspection- which is warped due to having the colour maps the same even though the ranges are different. It is an interesting idea being explored, however the models do not achieve comparable accuracy, suggesting that the datasets are poor themselves. The hypothesis does make sense that a good distilled dataset should result in a similar loss landscape to the full dataset, as it adequately captures the distribution of the data such that it is represented, however I do not think this is clearly explored or shown here especially as the loss is so high.\\n\\n\\n**References**\\n\\n[1] Geiping, J., Goldblum, M., Pope, P.E., Moeller, M. and Goldstein, T., 2021. Stochastic training is not necessary for generalization. arXiv preprint arXiv:2109.14119.\", \"questions\": \"Please see the questions and comments in the weakness section with the following more concretely:\\n\\nThe idea of minima-matching needs to be adequately explained; I am unsure what you mean; could you better explain how the loss landscapes are compared? Is it a local or global comparison? 
Using a metric would make this objective; even something as simple as their absolute difference would be great. \\n\\nWhy, for the CIFAR10 dataset, do you use the same architecture as the MNIST, given that it does not produce good accuracy when trained conventionally? Why are not more common architectures such as a ResNet[1] or VGG[2]? \\n\\nI need clarification on why there is an attempt at re-framing loss landscapes to cost surfaces- could you elaborate more on why you chose to refer to the spaces as cost surfaces instead of loss landscapes?\\n\\n\\n[1] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. InProceedings of the IEEE conference on computer vision and pattern recognition 2016 (pp. 770-778).\\n\\n[2] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014 Sep 4.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your review. We apologize for our late response. We hope these points help clarify your concerns:\\n\\n**Weaknesses**\", \"clarity\": [\"While the provided reference supports the idea of one-step learning, we would argue that it also goes against conventional deep learning wisdom. We are not claiming that the conventional wisdom is correct; quite the opposite. In addition, we are not training on the full dataset, but a dataset several orders of magnitude smaller.\", \"We considered 1000 runs to be sufficient to overcome the randomness of the random initialization and the randomness in each environment. For the RL experiments, we have also tested with 100 distillation-trained models over 100 episodes and reached similar results; thus, we assert that 1000 runs is sufficient. 
We agree that more runs with the original models would provide a fairer comparison.\"], \"qualitative_results\": [\"We agree that metrics would provide clearer comparisons. While the loss values are different, they are not expected to be the same. Distillation must train a model in a single step of learning, thus there is no mechanism forcing the loss values to reflect those of the original dataset. Rather, we argue that distillation meta-learning creates minima in parameter space close to those of the original dataset.\"], \"mismatch_between_definition_and_result\": [\"While the distillations do not reach the same performance as models trained on the original datasets, this learning was performed in a single step on less than one instance per class/action. In distillation, performance and compression are competing metrics. We agree that evaluating other distillations that prioritize performance over compression would strengthen our arguments.\"], \"figures\": [\"Fig 2: Fixed\", \"Fig 3a: Fixed\", \"Fig 4: Noted, this could be made more clear visually. As stated in the the caption, the rows represent the training method (i.e. the centerpoint of the plot) and the columns represent the cost values used to produce the contours.\", \"Fig 8b: Noted, the point colors are difficult to distinguish at the plot\\u2019s scale. You\\u2019re observation is correct, as noted in the caption, in this case the distillation does not converge to a minimum. This is a behavior possible with one-step learning, as the cost surface simply needs to map initializations to minima of the original task, not necessarily minima of the distilled task.\", \"CIFAR10 surfaces are shown in Figure 4b.\", \"This is not an extraordinary claim, the results are those reported in the paper and are the distillations used in the visualizations throughout. The 6-instance distillation of MNIST is shown in Figure 3.\", \"Fixed\", \"Agreed. 
We will provide the exact values for training accuracies, as well as standard deviations for all reported results.\", \"**Questions**\", \"We agree that a metric would strengthen our examination of the minima. By minimum matching, we refer to the position of minima in parameter space, not necessarily the value or sharpness of the minima.\", \"We will test with other architectures.\", \"We believe the terms are equivalent.\"]}", "{\"metareview\": \"This paper examines how distilled datasets can enable one-shot learning by analyzing the distilled data and the cost surface of the distilled dataset. The authors found that the distilled data not only mimics the features of the real dataset but also helps models reach the local minima of the real dataset with one-shot learning.\\n\\nReviewers\\u2019 comments highlight several strengths of the paper. All reviewers agree that the study is comprehensive, as it covers multiple domains. Reviewers (XZV1, 7zJF, N5Yq) commend the paper for its in-depth examination of the cost surface produced by the distilled dataset. Reviewers (CKhm, 5y8o) praise the readability of the paper.\\n\\nHowever, reviewers also raised several concerns. These include the lack of discussion on related works (Reviewer CKhM), questions about the validity of the contribution (Reviewer CKhM, 7zJF), the absence of a theoretical explanation for the distillation process (Reviewer N5Yq, XZV1), insufficient experiments on larger datasets, benchmarks, and alternative distillation algorithms (Reviewer XZV1, N5Yq, 7zJF), issues with the method's soundness in explaining why a solution is a local minimum (Reviewer XZV1, 5y8o), and a lack of clear presentation (Reviewer XZV1, 5y8o).\\n\\nDuring the rebuttal phase, the authors acknowledged the weaknesses pointed out by the reviewers and attempted to address their concerns. 
However, many of these issues remain unresolved.\\n\\nAfter careful consideration of all factors, the AC recommends rejecting the paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewer 5y8o questioned the assertion that single-step learning performs poorly, the unclear presentation, and insufficient evidence for some claims. Although the concern regarding the number of runs was addressed, other concerns by reviewer 5y8o had not yet been addressed. Moreover, the concerns of the reviewers (CKhm, N5Yq, XZV1) were not fully addressed, such as the absence of a theoretical explanation, the validity of the contribution, and so on. Given that most of the reviewers' concerns were not resolved, the AC recommends rejecting the paper.\"}
CCUrU4A92S
Re-examining learning linear functions in context
[ "Omar NAIM", "Guilhem Fouilhé", "Nicholas Asher" ]
In context learning (ICL) is an attractive method of solving a wide range of problems. Inspired by Garg et al., we look closely at ICL in a variety of train and test settings for several transformer models of different sizes trained from scratch. Our study complements prior work by pointing out several systematic failures of these models to generalize to data not in the training distribution, thereby showing some limitations of ICL. We find that models adopt a strategy for this task that is very different from standard solutions.
[ "In context learning", "GPT", "limitations" ]
https://openreview.net/pdf?id=CCUrU4A92S
https://openreview.net/forum?id=CCUrU4A92S
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vnXQhGncAA", "vBZhpLOsBf", "rumG5DaMzT", "mOUC80ux20", "lQxEriSkiU", "gqo7r6NQzu", "ZvUi4gEQiY", "XOaQyToWlx", "WMN56mflBf", "Rodj67kxD0", "QcJWPEv0X3", "NRJdFvL7TR", "KLvh3kCWq3", "IIFBuZkFTJ", "B0ue9jWN0j", "8kUInZya3u", "3kpvUspwnZ" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732521144912, 1731530063463, 1730361115636, 1730491434075, 1731530174122, 1731530394318, 1732468771333, 1732779255613, 1732509799237, 1731529932884, 1732431466565, 1732610034296, 1730933597655, 1732604709533, 1730678727833, 1732521685059, 1732434698548 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14174/Authors" ], [ "ICLR.cc/2025/Conference/Submission14174/Authors" ], [ "ICLR.cc/2025/Conference/Submission14174/Reviewer_i5eC" ], [ "ICLR.cc/2025/Conference/Submission14174/Reviewer_2Uvx" ], [ "ICLR.cc/2025/Conference/Submission14174/Authors" ], [ "ICLR.cc/2025/Conference/Submission14174/Authors" ], [ "ICLR.cc/2025/Conference/Submission14174/Reviewer_2Uvx" ], [ "ICLR.cc/2025/Conference/Submission14174/Authors" ], [ "ICLR.cc/2025/Conference/Submission14174/Reviewer_zJMu" ], [ "ICLR.cc/2025/Conference/Submission14174/Authors" ], [ "ICLR.cc/2025/Conference/Submission14174/Reviewer_2Uvx" ], [ "ICLR.cc/2025/Conference/Submission14174/Reviewer_i5eC" ], [ "ICLR.cc/2025/Conference/Submission14174/Reviewer_JBsD" ], [ "ICLR.cc/2025/Conference/Submission14174/Authors" ], [ "ICLR.cc/2025/Conference/Submission14174/Reviewer_zJMu" ], [ "ICLR.cc/2025/Conference/Submission14174/Authors" ], [ "ICLR.cc/2025/Conference/Submission14174/Authors" ] ], "structured_content_str": [ "{\"title\": \"About Zhang et al.\", \"comment\": \"Thank you for your clarification. 
We did try to distinguish ourselves from Zhang et al 2024 or [1] in comments we made to another reviewer. Here is what we said.\\n\\nThough similar to our work, [1] works with linear attention, whereas we look at attention layers as they actually are used with softmax (thus one could criticize [1] in the way you do in point 1 more than us at least on this score). In addition, [1] uses a new kind of optimization or training with gradients and a special fixed initial point. This means that their architecture and training are quite different from what normally happens with transformers; they are interested in getting a revised transformer-like model to learn linear functions, while we want to find out whether transformers as they actually are learn linear functions or something else. The results for the two architectures are quite different: While [1] says task shift does not affect their models, our task shifts affect the results in an important way, where we take D^{train}_H = N(0,1) (Our D_F) but D^{test}_H = N(0, \\\\sigma) (our D^test_H) for 1 \\\\leq \\\\sigma \\\\leq 10. Figure 1 clearly shows that for transformer models with soft attention, this task shift reduces performance dramatically. We also note unlike [1] that prompts that are too long induce chaotic behavior.\\n\\nIn the covariate shift [1] also does something different from what we do. In covariate shift in [1], the distribution in the prompt is shifted but the distribution of the query stays the same. We do something different. When we take a distribution over input points in train D_I and set D^test_I \\\\neq D_I, our shift is not the same; we shift both prompt and query distributions. With covariate shifts we found that the choice of points is important and model performance degrades considerably when the values of the functions on the chosen points lie beyond what we call boundary values. 
As far as we know we are the first to point out these boundary values and their dependence on model parameters. At least [1] does not do this. Our mathematical formulation of what models do explains their behavior.\\n\\nYou are right that we did not put all of this in the paper, and we should have a longer discussion. We will do this in a revised version.\"}", "{\"comment\": \"We thank the reviewer for the comments.\\n\\nWith regards to a more thorough lit review, we will add references to the revised version. After submission to ICLR, we have already added related work not cited in the submitted version: Fu et al. (2023), Xie et al. (2021), Wu et al. (2023), Zhang et al. (2023), Panwar et al. (2023), Bai et al. (2024). We would be glad to have more concrete suggestions about what to add over and above what the other reviewers have suggested.\\n\\nWe have posted a new version of the paper with an improved organization; hopefully this will answer your criticism.\\n\\nWith regards to notation, we will clarify f_{i,\\sigma}. What it means is this: f_{i,\\sigma} is the ith function sampled from N(0,\\sigma) for 1 \\leq \\sigma \\leq 10. We will clarify this notation. We apologize for D^T_F, which is a typo; D^t_F is the test distribution for the functions. D^t_I is the distribution for the points x_i in the sequences to which the functions are applied; D_I is the distribution for the training sequences. We will state this more clearly in the final version.\\n\\nClarify the rationale for studying models of different scales and discuss what insights are gained from these comparisons. We wanted to see which components in actual transformer based models were responsible for ICL and we wanted to see whether ICL improved with scale. This puts us apart from a lot of the literature that has tried to show that under certain assumptions transformers CAN do ICL. We want to know what they DO in practice.
This was why we tested a number of models, including Attention Only models and MLP only models. We will improve our description of the results in Table 1 and in section 4.6 where we summarize our findings.\\n\\nThe reason we speak of coefficients a, b \\in [-1,1] for functions ax + b is that given N(0,1) the model has seen functions with those coefficients many times (~70% of its training data in D_F), and that makes a difference to the overall success of its algorithm. For instance, consider the plots in Figure 4 where the coefficients are large. This discussion is at the end of the background section. This is regardless of the size of the model, though the largest models tested have essentially 0 error and the small ones have an error of 0.1 (see our Table 1).\"}", "{\"summary\": \"The paper investigates in-context learning (ICL) across various training and testing scenarios using different sizes of transformer models trained from scratch. Building on previous work, it highlights systematic failures in these models' ability to generalize to data outside the training distribution, revealing some limitations of ICL.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper focuses on an important and challenging problem: understanding the in-context ability of language models.\\n2. The writing is clear and easy to understand.\\n3. The authors provide code and detailed instructions for reproduction.\", \"weaknesses\": \"1. The models and empirical studies in the paper differ significantly from current large language models, potentially creating a gap between the claims and reality.\\n2. The findings of the paper have been previously proposed in other works.\\n3. The paper is missing some key references.\", \"questions\": \"Can you explain how your findings differ from the following paper?
In particular, [1] also discusses how distribution influences the in-context learning ability to learn linear functions.\\n\\n[1] Trained Transformers Learn Linear Models In-Context.\\n[2] Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The study investigates in-context learning (ICL) in transformer models, focusing on their ability to learn and generalize linear functions from contextual prompts. Inspired by previous work, the authors examine various transformer models, including small ones trained from scratch, to explore whether they can learn linear functions and generalize beyond the training distribution.\\n\\nHowever, there are two main problems in this paper:\\n\\n### 1. The writing problem: There are many typos, e.g., in ``line 047'', there should be a ''.'' after ''training data''.\\n### 2. Novelty: The paper indeed provides robust experiments to show the main point, but it lacks novelty, such as how to improve this problem.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper has the following strengths:\\n\\n### 1. Clear Motivation: The paper begins with a well-defined motivation, addressing gaps in the current understanding of in-context learning (ICL) in transformer models, especially for generalization.\\n\\n### 2. Comprehensive Experiments: The experiments cover various transformer architectures and test them on various distributions.\", \"weaknesses\": \"The paper has the following weaknesses:\\n\\n### 1. Clarify Terminology and Notation: The writing is a little poor. For example, in ``line 047'', there should be a ''.'' after ''training data''. Furthermore, the table should be in a more beautiful structure.\\n\\n### 2. 
Explanation for the Problem: Although the paper provides various experiments, it should explain the failures of these models to generalize to data not in the training distribution.\\n\\n### 3. Novelty: The paper provides robust experiments to show the main point but lacks novelty, such as how to improve this problem.\", \"questions\": \"1. Explanation for the Problem: Could you please explain the failures of these models to generalize to data not in the training distribution?\\n2. Could you please provide some methods to improve this problem?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their comments.\\n\\nWe will clarify our notation in the final version.\\n\\nWith regard to an explanation of the failures of these models to generalize to data not in the training distribution, the last section explains that the model is doing something different, and the mathematical model in the last section explains the model behavior.\\n\\t\\nTo answer your question concerning failure to generalize, please see the last section of the paper. We have clarified our mathematical model and our notation in a revised version of our paper that we have put on the site.\\n\\t\\nTo improve the method is an important concern. Of course you could program it to do linear regression. The problem is that this isn\\u2019t learning the function class. But we feel that learning linear regression is not really the issue here.
The issue is figuring out what transformers are doing in ICL for simple tasks so that we have a hope of understanding what they really do in more complex tasks.\", \"title\": \"official comment\"}", "{\"comment\": \"We thank the reviewer for the comments.\\n\\nWe address the following three concerns:\\n1- The models and empirical studies in the paper differ significantly from current large language models, potentially creating a gap between the claims and reality.\\n2- The findings of the paper have been previously proposed in other works.\\n3- The paper is missing some key references.\\n\\nAs we mentioned, we need small models to do training from scratch. We need to train from scratch to avoid \\u201cleakage\\u201d from uncontrolled pretraining. We did query larger models but we cannot train them from scratch.\\n\\nThank you for pointing out these two references. They are very helpful and we will certainly cite and discuss them in the final version. Nevertheless, with respect to [1], though similar to our work, [1] works with linear attention, whereas we look at attention layers as they actually are used with softmax (thus one could criticize [1] in the way you do in point 1 more than us at least on this score). In addition, [1] uses a new kind of optimization or training with gradients and a special fixed initial point. This means that their architecture and training are quite different from what normally happens with transformers; they are interested in getting a revised transformer-like model to learn linear functions, while we want to find out whether transformers as they actually are learn linear functions or something else. The results for the two architectures are quite different: While [1] says task shift does not affect their models, our task shifts affect the results in an important way, where we take D^{train}_H = N(0,1) (our D_F) but D^{test}_H = N(0, \\sigma) (our D^test_H) for 1 \\leq \\sigma \\leq 10.
Figure 1 clearly shows that for transformer models with soft attention, this task shift reduces performance dramatically. We also note, unlike [1], that prompts that are too long induce chaotic behavior.\\n \\nIn the covariate shift [1] also does something different from what we do. In covariate shift in [1], the distribution in the prompt is shifted but the distribution of the query stays the same. We do something different. When we take a distribution over input points in train D_I and set D^test_I \\neq D_I, our shift is not the same; we shift both prompt and query distributions. With covariate shifts we found that the choice of points is important and model performance degrades considerably when the values of the functions on the chosen points lie beyond what we call boundary values. As far as we know we are the first to point out these boundary values and their dependence on model parameters. At least [1] does not do this. Our mathematical formulation of what models do explains their behavior. We will highlight this in the revised version.\\n \\nWith respect to suggested reference [2], they largely follow what many other ICL papers do\\u2014offer a proof by construction that under certain assumptions transformers can implement many different algorithms for computing linear functions and other tasks as well. Their empirical experiments show that under suitable training and testing distributions for sampling, transformer models can learn such algorithms. They signal something like our boundary values B, -B and propose to ignore values outside [B, -B] by \\u201cclipping\\u201d them. We take a very different approach. By ignoring these outside values, we don\\u2019t really know what algorithm transformer models have implemented; we demonstrate with out-of-distribution robustness tests that transformer models don\\u2019t use any of the standard algorithms (ridge regression, linear regression, ...) but do a different kind of projection and interpolation.
\\nWe have included a short discussion of these 2 papers in the revised submission; hopefully the revised version will clarify and address your concerns.\"}", "{\"comment\": \"Dear authors,\\n\\n Thanks for your further explanation. However, I plan to keep my score. Thanks for your time!\\n\\nReviewer 2Uvx\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Thank you for revising the paper. Below are my comments:\\n\\nWhile the paper highlights that several works claim Transformers can solve linear regression, it is important to acknowledge that a line of work has already identified counterexamples and brought this issue to the community\\u2019s attention. I suggest that the authors discuss and credit these efforts in the introduction or related work section, rather than implying that this paper is the first to address this issue. For instance, the generalization error in Transformers has been recognized in Garg et al.\\u2019s work, and other studies have examined similar issues in out-of-distribution (OOD) settings. Giannou et al. explored boundary value limitations in OOD ranges, while Zhang et al. addressed context-length generalization challenges. These studies collectively show a gap between theoretical expectations and experimental results, indicating that Transformers cannot solve linear regression reliably except in very simple scenarios.\", \"relevant_references_include\": [\"Liu, Jerry Weihong, et al. \\u201cCan Transformers Solve Least Squares to High Precision?.\\u201d ICML 2024 Workshop on In-Context Learning.\", \"Shen, Lingfeng, Aayush Mishra, and Daniel Khashabi. \\u201cDo pretrained Transformers Really Learn In-context by Gradient Descent?.\\u201d arXiv preprint arXiv:2310.08540 (2023).\", \"Giannou, Angeliki, et al. 
\u201cHow Well Can Transformers Emulate In-context Newton\u2019s Method?.\u201d arXiv preprint arXiv:2403.03183 (2024).\", \"Zhang, Ruiqi, Spencer Frei, and Peter L. Bartlett. \u201cTrained transformers learn linear models in-context.\u201d arXiv preprint arXiv:2306.09927 (2023).\", \"The contribution of this work appears limited in scope. As noted above, the issue of Transformers failing to generalize has already been studied from various perspectives, such as sparsity in trained model weights and error precision in learned representations. While this paper adds evidence that Transformers struggle with strictly increasing or decreasing linear regression problems and boundary value issues, it primarily offers a hypothesis without substantial supporting evidence, whether theoretical or through mechanical interpolation. Expanding the work with stronger evidence or a deeper theoretical framework could significantly strengthen the contribution.\"]}", "{\"comment\": \"We thank the reviewer for the remarks and questions.\\n\\nWe agree with the reviewer that it is an interesting question why the models don\\u2019t do what they are capable of. But we were interested in a different, equally interesting (to us) question: what are the models actually doing in this very simple task? First, we showed that they are not learning an algorithm like least squares. In the last section of the paper, we provide a mathematical model of what they are doing; they are treating the sequences not as the graph of a function but as just a sequence. They develop an algorithm using something like Olsson et al\\u2019s induction heads to interpolate a next value for the given sequence in the prompt from close-by sequences in their pre-training.\\n\\nDid we try to train with multiple distributions? Yes; in section 4 we look at three different distributions for function sampling: N, bimodal and uniform (section 4.2).\\n \\nWe do say what the models are actually learning; see section 5 of the paper.
We will clarify that section in our revised version. We have highlighted it also in the abstract.\\n\\nWe will clarify that current training methods are not optimal. We wanted to investigate training and prompting methods already in the literature; we show that the proposed methods don't do what is claimed.\\n\\nWe will certainly cite and discuss Giannou, Angeliki, et al. \\\"How Well Can Transformers Emulate In-context Newton's Method?\\u201d, an interesting paper, in our revised version. It uses linear self attention, which makes sense for the theoretical results, since they want to show that LLMs can in principle approximate Newton\\u2019s method. In this, they are like many other papers we cite, seeking to explain a model behavior by a theoretical construction. Our undertaking is different. We start from actual transformer architectures with soft attention and try to determine what they are actually doing in this task. We in fact show that they don\\u2019t do any of the reconstructions that we have seen proposed in the literature. \\nGiannou et al. also only examine differences in sampling the sequences of points in the prompt; i.e. they look at in our notation D_I \\\\neq D^t_I, where D_I is the training distribution of points and D^t_I is the test distribution. In particular they fix a particular function and then input into it points that are in the training distribution and points outside of distribution. In distribution is defined as having been very likely to have been seen in training. However, when we have looked at cases where D_I \\\\neq D^t_I, we saw that sometimes for input values within distribution, we got bad results (see figure 4 for example) when the function\\u2019s values on those points were outside what we call boundary values. \\nIn addition to examining D_I \\\\neq D^t_I, we also vary the distributions from which we sample functions, D_F and D^t_F\\u2013i.e. values of the points chosen. 
Giannou et al. did not do this.\\n \\nFinally, for their tests Giannou et al. use only very small models (4 attention heads, embedding dimension 64) and go only to 6 layers, while we go up to 12L8AH and d_emb = 256.\\nThey observe that after four layers, there is no significant improvement in performance by adding more layers. Those results are different from what we find (maybe due to their use of linear attention). We have included some comments on Giannou et al. and clarified how we differ from the related work we know about in a revised version of the paper that we have submitted on the site.\"}", "{\"comment\": \"Thanks for the response. The response answers my questions. However, I still think the paper lacks novelty and is not ready to be published. Thus, I keep my score as 3.\"}", "{\"comment\": \"Thank you for your response. Having considered your rebuttal and the other reviews, I maintain my original score.\"}", "{\"summary\": \"This paper studies experimentally the setting of in-context learning linear regression. The authors reproduce the experiments of Garg et al. and at inference time test the models with 1) different distributions for the input/weight vectors and 2) larger values for the input/weight vectors.\\nBased on the observations of these results the authors argue that these models do not learn some type of algorithm.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Understanding what these models learn even in the setting of linear regression can significantly enhance our understanding of their capabilities and limitations. Indeed it has been observed that the models do not generalize in out-of-distribution samples and thus it is unclear whether these models learn some type of algorithm.\", \"weaknesses\": \"1. The provided experimental study does not explain what these models are actually learning.
For example, it can be the case that the models are learning a tailor-made preconditioned gradient descent type of algorithm, with the preconditioning matrix being optimal for the in-distribution values and sub-optimal for out-of-distribution values.\\n2. It cannot be excluded that the current training methods are not optimal, since we know that these models do have the capability of representing these algorithms.\\n3. Some of these results have already been observed experimentally; for example see [1] (Figures 5, 6). These experiments consider multi-dimensional linear regression; they keep all except one dimension fixed and plot how the function changes when varying one dimension from [-B,B], similar to the authors' experiments for one-dimensional linear regression.\\n\\nIn general the main weakness of this paper is that it does not make a convincing argument towards what these models are actually learning.\\n[1]: Giannou, Angeliki, et al. \\\"How Well Can Transformers Emulate In-context Newton's Method?.\\\" arXiv preprint arXiv:2403.03183 (2024).\", \"questions\": \"I agree with the authors that these models do not exactly learn some type of algorithm, but I think that the main question is why these models do not do so while they have the expressivity? One possible explanation is that there exist parameters that better interpolate the specific distributions, while existing algorithms work for any type of distribution.\\n\\nDid the authors try to train the models with multiple distributions? It could be the case that then the models are able to perform some type of algorithm by not fine-tuning their weights to fit a specific distribution. Furthermore, considering the second point above, did the authors perform a search over the hyperparameters for training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"edited comment on Giannou et al.
and a new version of the paper with comments on Zhang et al. and Giannou et al.\", \"comment\": \"Please find above a revised version of our comments on Giannou et al. and also see a revised version of the submission with comments included. The new version defends our positive proposal for what the models are doing in more depth. We hope to have addressed your concerns.\"}", "{\"summary\": \"The paper investigates Transformer behavior when trained from scratch to perform linear regression. It examines out-of-distribution (OOD) generalization across various settings, such as different ranges and distributions of linear functions.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper conducts thorough experiments across various scales and settings, providing a comprehensive analysis of Transformer behavior.\", \"weaknesses\": \"1. The related work could benefit from a more comprehensive review. The paper primarily discusses the works of Garg et al., Aky\\u00fcrek et al., and Von Oswald et al. on regression for in-context learning (ICL), but there are additional relevant studies in this area that are not cited. A more thorough literature review, covering empirical and theoretical works on regression in ICL, would enhance the paper\\u2019s context. Checking recent citations in this line of research may help identify key studies to include.\\n\\n2. The notation in Section 4 could be clarified, as some symbols are difficult to interpret. For example, it\\u2019s not immediately clear what $\\\\sigma$ represents in the context of $f_{i, \\\\sigma}$. Additional explanations could help improve readability.\\n\\n3. The organization of the paper could be refined to improve the overall flow. At times, the presentation feels somewhat informal, with experiments presented in sequence without clear connections, motivations, or in-depth analyses.
For instance, it would be helpful if the authors could clarify the rationale for studying models of different scales and discuss what insights are gained from these comparisons. Additionally, mixing experiments on different scales and distributions makes it challenging to understand the primary conclusions. This structure could make it clearer to the reader what the authors aim to convey.\\n\\nIn general, I appreciate that the authors highlight the out-of-distribution (OOD) generalization issue for Transformers trained on linear regression, as initially noted by Garg et al. However, the experimental findings in Section 4 could be more impactful with clearer motivations and discussions. The hypothesis regarding induction heads and their role in OOD performance is somewhat interesting, though it could be strengthened with supporting theoretical insights or experimental validations, such as through mechanical interpolation. Presenting this hypothesis with additional rigor could provide more substantial contributions to the community.\", \"questions\": \"1. While it seems intuitive that, for instance, \\\"9L6AH\\\" refers to a model with 9 layers and 6 attention heads, this notation is somewhat non-standard. It would be helpful if the authors could define this notation explicitly before using it. Many other notations in the paper also follow this informal style, though I haven\\u2019t listed each instance. It would be beneficial if the authors could standardize and define these terms clearly at the outset.\\n\\n2. In line 193, could the authors clarify whether it is $D^T_F$ or $D^t_F$? There are also several other typos throughout the paper that I haven't enumerated. Clarifying these would improve overall readability and precision.\\n\\n3. In line 190, it\\u2019s unclear why the authors mention that the coefficients are in the range $[-1, 1]$, as this differs from the $N(0, 1)$ distribution. 
Additionally, there is no supporting figure or result indicating that coefficients within $[-1, 1]$ lead to zero MSE error. Given that I generally observe non-zero but small MSE error, it would be helpful if the authors could clarify this paragraph, particularly regarding the model size required to achieve zero average MSE error.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"about Giannou et al.\", \"comment\": \"We also want to clarify our relation to Giannou et al. We cited and briefly discussed Giannou, Angeliki, et al. \\\"How Well Can Transformers Emulate In-context Newton's Method?\\u201d in the revised version on the site and note that they have something like boundary values. But their set up like Zhang et al's is also quite different from ours.\\n\\nThere are three important differences. First, Giannou et al use linear self attention, which makes sense for the theoretical results, since they want to show that LLMs can in principle approximate Newton\\u2019s method. In this, they are like many other papers we cite, seeking to explain a model behavior by a theoretical construction. Our undertaking is different. We start from actual transformer architectures with soft attention and try to determine what they are actually doing in this task. We show that our models don\\u2019t do any of the reconstructions that we have seen proposed in the literature. \\n\\nSecond, Giannou et al. also only examine differences in sampling the sequences of points in the prompt; i.e. they look at in our notation D_I \\\\neq D^t_I, where D_I is the training distribution of points and D^t_I is the test distribution. In particular they fix a particular function and then input into it points that are in the training distribution and points outside of distribution. In distribution is defined as having been very likely to have been seen in training. 
This is a special case of what we do, since we also shift the test distribution for the functions sampled. When we looked at cases where D_I \\neq D^t_I, we saw that sometimes for input values within distribution, we got bad results (see figure 4 for example) when the function\\u2019s values on those points were outside what we call boundary values for all models tested, whereas Giannou et al. did not find any effects for models with 4 and more layers.\nThird, to test, Giannou et al. use only very small models (4 attention heads, 64 embedding dimensions) and go only to 6 layers, while we go up to 12L8AH and d_emb = 256. They observe that after four layers, there is no significant improvement in performance by adding more layers. Those results are different from what we find (maybe due to their use of linear attention). We have included some comments on Giannou et al. in a new revised version of the paper that we have submitted on the site, which we hope will address your concerns.\"}", "{\"title\": \"reply to reply\", \"comment\": \"Thanks for taking the time to read our response. We very much appreciate that.\", \"just_to_be_clear_about_what_we_think_our_novel_contribution_is\": \"we show that the theoretical reconstructions that prevail in the literature are not what transformers are doing when they ICL linear functions. Models do not compute values of a linear function via linear regression, ridge regression, etc; if they did, the performance would not show degradation on sequences that are rare. We also observed two kinds of degradation that depend on parameters we call boundary values. As far as we know, we are the first to talk about such values. Finally, the models use the whole sequence, and only sequences of a certain length, to do the task. We conclude from these observations that the models do not understand the sequence as the plot of a function with parameters to be estimated but rather interpolate values from other similar sequences. 
The method the models discover is ingenious and works very well when there is enough data; our mathematical model accounts for all of our novel empirical observations.\"}" ] }
CB2r9PwuRQ
CausalESC: Breaking Causal Cycles for Emotional Support Conversations with Temporal Causal HMM
[ "Mingzheng Li", "Xiao Sun", "Zhuoer Zhao", "Feng-Qi Cui", "Jinyang Huang", "Weijie Feng", "Xun Yang", "Zhi Liu", "Meng Wang" ]
Emotional Support Conversation (ESC) is a rapidly advancing task focused on alleviating a seeker's emotional distress. The intricate interplay between cognition, emotion, and behavior presents substantial challenges for existing approaches, which often struggle to capture the dynamic evolution of the seeker's internal state during conversations. To address this, we propose \textbf{CausalESC}, a model designed to dynamically represent the seeker's internal states, by assuming that the generative process governing the mutual influence among these factors follows a first-order Markov property, with \iid random variables. The model comprises a prior network, that disentangles the seeker's emotions, cognition, and behavior, and a posterior network, which decouples the support strategy factors. The prior network also models the psychological causality of the seeker within each conversation round. To account for the varying effects of support strategies on the seeker's intrinsic states, we incorporate a support intervention module to capture these impacts. Additionally, a holistic damping transfer mechanism is designed to regulate the complex interactions among cognition, emotion, behavior, and strategy, ensuring that changes remain within a reasonable range. Our model effectively breaks causal cycles and achieves causal representation learning. Both automatic and human evaluations demonstrate the effectiveness of our model, emphasizing the advantages of modeling the evolution of the seeker's internal state under support strategies.
[ "Emotional Support Conversation", "Causal Learning", "Text Generation" ]
https://openreview.net/pdf?id=CB2r9PwuRQ
https://openreview.net/forum?id=CB2r9PwuRQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yzuHemoITZ", "uQ8fAmY3oE", "MD6uKU4wgl", "7WicOEJF2J", "7QwSmoFy8x" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730778465977, 1732522257743, 1730138446558, 1730774871247, 1730668739898 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2111/Reviewer_BEMh" ], [ "ICLR.cc/2025/Conference/Submission2111/Authors" ], [ "ICLR.cc/2025/Conference/Submission2111/Reviewer_qag5" ], [ "ICLR.cc/2025/Conference/Submission2111/Reviewer_dkMZ" ], [ "ICLR.cc/2025/Conference/Submission2111/Reviewer_hAFe" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes CausalESC, a temporal causal hidden Markov model for emotional support conversations that aims to capture the dynamic evolution of seekers' internal states. The key innovation is modeling the mutual influence between cognition, emotion, and behavior as a first-order Markov process with i.i.d. variables, which breaks potential causal cycles into a directed acyclic graph (DAG). The model consists of three main components: a dialogue floor encoder, a temporal causal hidden Markov module, and a psychocausal hybrid decoder. The authors claim their approach is the first to learn causal representations within causal loops in emotional support conversations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper addresses an important challenge in emotional support conversations by modeling the dynamic nature of psychological states rather than treating them as static snapshots.\\n2. The theoretical foundation drawing from Cognitive Behavioral Therapy (CBT) provides grounding for the model architecture.\\n3. The proposed solution to break causal cycles using temporal unfolding and Markov assumptions is innovative and mathematically sound.\\n4. 
The model architecture is comprehensive, incorporating multiple relevant components like strategy intervention and holistic damping transfer mechanisms.\", \"weaknesses\": \"1. The evaluation section is missing from the provided content, making it impossible to assess the empirical validity of the claims.\\n2. The mathematical formulation lacks sufficient detail about how the holistic damping transfer mechanism works and how it ensures changes remain within reasonable ranges.\\n3. While the paper claims to be the first to learn causal representations within causal loops, it does not thoroughly discuss or compare with other potential approaches to handling circular causality.\\n4. The paper does not adequately discuss the limitations of the first-order Markov assumption, which may be an oversimplification for complex psychological processes.\\n5. The implementation details of the support intervention module using attention mechanisms need more elaboration on why this particular approach was chosen.\", \"questions\": \"1. How do you justify the i.i.d. assumption for psychological variables that are likely to be highly correlated across time steps?\\n2. What metrics were used to evaluate the \\\"reasonable range\\\" of changes in the holistic damping transfer mechanism? How were these thresholds determined?\\n3. How does the model perform when dealing with long-term dependencies that might violate the first-order Markov assumption?\\n4. Can you provide empirical evidence that the temporal unfolding of causal cycles actually captures the true psychological dynamics better than alternative approaches?\\n5. 
How computationally intensive is this model compared to existing approaches, given the additional complexity of temporal causal modeling?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper builds on a current research program looking into \\\"Emotional Support Conversation\\\", using chatbots to provide psychological support. It constructs a model that uses several causally connected 'psychological' latent variables that are presumed to relate to the seeker's generation of an utterance and used to generate the next appropriate utterance by the model. The authors also introduce several other knobs to their model, including for example a 'damping' module. The behavior of the model is compared to several existing models using auto-evaluation as well as human experts. In a head-to-head with two other models (BlenderBot-Joint and MISC), the current model is preferred by human experts. The authors also provide a specific test case.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The problem that the authors are overall attempting to address (increase in mental health issues combined with a limited support system) is timely and urgent.\\n\\nThe overall direction of examining latent 'psychological' variables is a welcome addition to the current landscape of support systems. \\n\\nThe work seems technically sound. \\n\\nI appreciate the comparison to multiple other models, and the use of multiple evaluation methods.\", \"weaknesses\": \"This paper has several issues that prevent a recommendation for acceptance. Some of these are not uniquely the issue of this paper in itself, but more the eco-system within which the paper is written (ESC). 
I'll detail more below, but as one example, the opaque reliance on three 'human experts' to examine such tasks seems especially problematic, but is also used in other accepted papers that this paper relies on. Still, this seems like a negative practice such that precedence is not a good guide for it.\", \"in_more_detail\": [\"While the paper makes general contact with psychology and cognitive science, it is superficial at best. Generally pointing at things like 'cognitive behavioral therapy' or 'emotional regulation' ignores most of the work that has been done to actually test, validate, and model these things. I'm exaggerating to make a point, but the situation is similar to trying to model the physics of a situation while ignoring what we know actually about physics, broadly citing a physics textbook, and then throwing an LLM at it that causally connects in a loop variables labeled things like 'friction', 'momentum', and 'heavy-ness' (some of those are indeed professional terms, some are not, and simply connecting them in a DAG doesn't do much).\", \"Looking at figure 2 for example, how exactly are things like 'emotion, cognition, behavior' actually meant to affect one another? There are huge literatures in computational cognitive science trying to connect these things that go into actual models of them, but here they're all just kind of in a soup together. You would think that 'behavior' would be something like an action variable that is determined by a person's cognitive and emotional state (\\\"I am hungry, there is an apple over there, I will reach for the apple\\\") but then the next state should be determined by the behavior as the actual causal variable, why would the state determine the behavior except through its affect on mental states, unless this is meant to be some sort of habit or instinct pathway?\", \"Continuing that line: The 'damping' module has a specific technical role, keeping the variables within some bound. 
That's presumably not meant to have a psychological reality, or if it does it is presumably something rather biological. Instead, the authors connect it to 'emotion regulation theory' which has a very specific, tested meaning within psychology and refers to specific strategies people use to either 1) reassess their current situation or 2) tamp down outward behavior expressing their emotion. Instead, the 'damping' is happening over cognition, emotion, behavior broadly without any attempt to model or validate the actual procedure of 'emotion regulation' in the well-accepted meaning of this term. I don't think the authors *should* have to validate that their damping module is in fact implementing something like a re-assessment strategy, there's no reason to expect that it is doing that, but then why call it that and connect it to something it isn't doing? I mention this point on its own but also as an example of the overall weakness of the paper in connecting to cognitive/psychological work in general.\", \"The authors claim as one of the main contributions the unfolding of a causal loop into multi-step MDPs, but this is pretty standard in modeling how agents interact with the world using MDPs and latent mental variables (see e.g. the work of Chris Baker and Josh Tenenbaum on inverse planning)\", \"I appreciate the comparison to multiple other models but it seems like the authors' model basically misses out on nearly all the comparisons. That is, for almost every metric there seems to be a better-fitting model; the authors mostly write around this rather than facing it head on.\", \"Probably the most problematic issue in the paper is the actual testing of whether this model does what it is purported to do. In this analysis the authors use 'three experts' but very little detail is provided about this evaluation. Who were these experts? In what way were they experts? What training did they receive and how did they do the evaluation? Were they students from the lab? 
Colleagues? Naive participants recruited online? Why was the comparison to two models that didn't do that well on the auto-eval instead of the models that did better than the authors' model on the auto-eval (e.g. chatGPT)? Presumably if these systems are ever going to be released into the wild the relevant comparison would be to show the dialogs to a large and diverse sample of naive human participants and have them choose the preferred continuation, and not just between models but between models and a natural conversation. Otherwise, the models may do better than one another while existing in a completely closed-loop format that is light-years away from natural dialog.\", \"I appreciate the use of the examples and test cases but they only highlighted just how subjective the nature of 'expertise' judgement can be here. For example, I personally found the suggestion of the system in Figure 1 to be non-supportive and downright missing the pragmatics of the situation (if a friend told me they were worried about losing their job I might think they're trying to find comfort, and if I were to suggest to them 'hmmm, yeah, better update your resume' they might be taken aback, thinking that I thought they could in fact lose their job). 
As another example, I didn't find the \\\"I think you will be surprised at how intelligent a dog can be\\\" to be some kind of 'affirmative insight' that understands the seeker's 'genuine needs', nor do I find the previous example to be 'superficial affirmation'; this all feels like reading tea-leaves.\"], \"questions\": [\"Can the authors provide significantly more detail about how the human-based evaluation worked?\", \"What would happen to the authors' results and conclusions if instead of '3 experts' the authors used a diverse and large group of naive participants tasked with assessing which continuation they preferred?\", \"What would happen to the authors' results and conclusions if they compared their model using human experts beyond BlenderBot-Joint and MISC, to other models including the ones that did better on the auto-eval, and to human-generated response?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents \\\"CausalESC,\\\" a novel model for Emotional Support Conversation (ESC) tasks, designed to alleviate emotional distress by capturing the evolving internal state of a seeker in conversation. The model proposes a Temporal Causal Hidden Markov Model (HMM) to represent the dynamic interplay of emotions, cognition, and behavior over time, grounded on a first-order Markov assumption. It includes a prior and posterior network that disentangle the seeker\\u2019s internal states and support strategies, alongside a damping transfer mechanism to regulate interactions among these components. Extensive experiments on the ESConv dataset show that CausalESC outperforms state-of-the-art ESC models, demonstrating its ability to provide responsive and supportive dialogue.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
Dynamic Representation: By leveraging a Markov property, the model effectively captures the evolution of internal states across conversation rounds, addressing limitations of static models in ESC.\\n2. Interpretability: The model\\u2019s architecture, with distinct modules for cognition, emotion, behavior, and strategy, allows for a clear understanding of how the model interacts with different aspects of the seeker's internal state.\", \"weaknesses\": [\"1. Complexity and Computational Overhead: The model introduces significant complexity, including multiple components like prior and posterior networks, a support intervention module, and a damping transfer mechanism. This could lead to substantial computational costs and may not be feasible in real-time applications.\", \"2. Limited Benchmark Comparison: While the paper compares the model against several ESC baselines, it lacks a more comprehensive comparison with larger models (e.g., fine-tuned ChatGPT or LLaMA) on multiple ESC datasets, which could better contextualize its performance.\", \"3. Grounding problem: How do you ensure that the hidden state corresponds to cognitive state, affective state, behavioral state ....?\", \"4. Unclear writing:\", \"In line 209, what is the issue of circularity, could you offer some examples?\", \"The difference between $z$ and $\\\\epsilon$ is not clearly stated in sec3.3\", \"In line 238, the statements 'the second term' , and 'the third term' should be written more clearly.\", \"In sec 3.3.1, the authors should write more on the holistic stuff, such as how the posterior, the prior ... work together to gain the next generated utterance\", \"In line 270, 'According to .. 
mechanism', the authors should offer references regarding this statement.\", \"The authors should provide more explanation on the designation of Eq7\", \"Is the psychocausal memory decoder a transformer decoder?\", \"In sec 4, the authors should talk about the training data, as this part is missing throughout the paper.\", \"In table 1, the authors should add a row to demonstrate the gap between their work and the SOTA or the second-best model.\", \"In figure 4, what is the strategy factor?\", \"5. Unsatisfying Performance: Table 1 shows that the performance is unsatisfying. Table 3 shows that what line 465-line 468 states is inaccurate, as R-L rises after deleting the CEB causal module.\"], \"questions\": \"As seen above\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"The authors focus on psychological problems, and in sec4.4, human evaluation is involved. The authors should state whether they had permission to perform these human evaluation experiments.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces \\\"CausalESC,\\\" a model designed to improve Emotional Support Conversations (ESC) by dynamically representing a support seeker\\u2019s evolving internal states (cognition, emotion, behavior) and adapting responses accordingly. The model employs a Markov process to handle the causal dependencies among emotions, cognition, and behavior over time. It also comprises a prior and posterior network to disentangle the seeker\\u2019s internal states from the support strategy factors. An additional damping transfer mechanism stabilizes the interactions between internal states and strategy. 
Experimental results show that CausalESC demonstrated improved empathy, fluency, and response quality over baseline models on the ESConv dataset in both automatic and human evaluations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. CausalESC models the evolution of a seeker\\u2019s internal states (emotion, cognition, and behavior) over time, which is a significant improvement over previous methods that often treat these states as static snapshots.\\n2. Unlike some models that use external knowledge bases for empathy and support strategies, CausalESC achieves high performance with its internal mechanism alone. This independence reduces dependencies on external resources and ensures a more generalized approach.\", \"weaknesses\": [\"Some notations are confusing:\", \"The upper subscription $s$: some are used for seekers and others are used for strategy.\", \"In Equation 2, the notation $\\\\le T$ suggests cumulative variables up to time $T$. However, in other places, such as equation3, variables are simply indexed by $t$.\", \"$z_{po}$ and $z_{pr}$ are heavily used without a clear definition.\", \"The comparison with the BlenderBot-based models in Table 1 is not significant. I think the proposed method appears to be compatible, rather than exclusive, with the BlenderBot-based models. It can still incorporate external knowledge into the decoder and achieve better results.\"], \"questions\": \"What is the hybrid schema in Table3?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
CAssIgPN4I
Real2Code: Reconstruct Articulated Objects via Code Generation
[ "Zhao Mandi", "Yijia Weng", "Dominik Bauer", "Shuran Song" ]
We present Real2Code, a novel approach to reconstructing articulated objects via code generation. Given visual observations of an object, we first reconstruct its part geometry using image segmentation and shape completion. We represent these object parts with oriented bounding boxes, from which a fine-tuned large language model (LLM) predicts joint articulation as code. By leveraging pre-trained vision and language models, our approach scales elegantly with the number of articulated parts, and generalizes from synthetic training data to real world objects in unstructured environments. Experimental results demonstrate that Real2Code significantly outperforms the previous state-of-the-art in terms of reconstruction accuracy, and is the first approach to extrapolate beyond objects' structural complexity in the training set, as we show for objects with up to 10 articulated parts. When incorporated with a stereo reconstruction model, Real2Code moreover generalizes to real-world objects, given only a handful of multi-view RGB images and without the need for depth or camera information.
[ "articulated objects", "code generation LLMs", "foundation models" ]
Accept (Poster)
https://openreview.net/pdf?id=CAssIgPN4I
https://openreview.net/forum?id=CAssIgPN4I
ICLR.cc/2025/Conference
2025
{ "note_id": [ "upFktJF0Bj", "u6qU4oAwYb", "rA1qMDdM3S", "lj9uYoRt0e", "hDv4Xrt8R2", "cufRSoWES0", "aAozb9Wutt", "YlUovQh4RT", "Y898hvTU3p", "XmKwyjBSAm", "XLfkyorEpK", "TlVdLV9j31", "QiFMlKEDwy", "PmhXoePqT1", "PC1AcFSbFv", "G20bprSf2x", "4FhwCFO3TT", "0kWfHPKY49" ], "note_type": [ "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1733130830491, 1730715644164, 1737524203064, 1733192646832, 1733208501554, 1733131596484, 1733296111244, 1733193422363, 1733127000718, 1733187352311, 1734078769084, 1733129812312, 1730041636135, 1733128399003, 1733126003272, 1730428899793, 1730725324575, 1731216345888 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12609/Reviewer_S2Gc" ], [ "ICLR.cc/2025/Conference/Submission12609/Reviewer_GDby" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12609/Authors" ], [ "ICLR.cc/2025/Conference/Submission12609/Authors" ], [ "ICLR.cc/2025/Conference/Submission12609/Authors" ], [ "ICLR.cc/2025/Conference/Submission12609/Authors" ], [ "ICLR.cc/2025/Conference/Submission12609/Reviewer_1whT" ], [ "ICLR.cc/2025/Conference/Submission12609/Authors" ], [ "ICLR.cc/2025/Conference/Submission12609/Authors" ], [ "ICLR.cc/2025/Conference/Submission12609/Area_Chair_upEw" ], [ "ICLR.cc/2025/Conference/Submission12609/Authors" ], [ "ICLR.cc/2025/Conference/Submission12609/Reviewer_mGLU" ], [ "ICLR.cc/2025/Conference/Submission12609/Authors" ], [ "ICLR.cc/2025/Conference/Submission12609/Reviewer_mGLU" ], [ "ICLR.cc/2025/Conference/Submission12609/Reviewer_1whT" ], [ "ICLR.cc/2025/Conference/Submission12609/Reviewer_9fCy" ], [ "ICLR.cc/2025/Conference/Submission12609/Reviewer_S2Gc" ] ], 
"structured_content_str": [ "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the response and clarifying my concerns. I don't have any additional questions.\"}", "{\"summary\": \"The paper focuses on the task of articulated object reconstruction given only a few images of an object. The proposed pipeline first reconstructs parts from images and then leverages an LLM to predict the joint parameters, which generalizes to objects with multiple joints. The method is evaluated on five categories in the PartNet-Mobility dataset and outperforms previous methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a new pipeline, Real2Code, to reconstruct articulated shapes from images. It shows promising results on multiple categories with different joint types in the PartNet-Mobility dataset.\\n\\n2. I find its generalization ability to multiple-joint shapes particularly interesting, which could potentially enable many real-world robot manipulation tasks.\\n\\n3. The paper is overall easy to read.\", \"weaknesses\": \"1. The proposed pipeline consists of multiple components and is, as a result, rather fragile from what I understand, since a failure in any component in the middle can cause the entire pipeline to break down. For example, if the part bounding box parameters (segmentation or shape completion) are inaccurate, the joint prediction part will carry these errors. Since the whole procedure is open-loop, I wonder if the method still produces reasonable shapes assuming initial bounding box predictions are inaccurate?\\n\\n2. The method is only evaluated on five categories, and these categories (Box, Refrigerator, Storage-Furniture and Table) are all quite similar in topology, as are the real-world examples. So I think it would be helpful to see results on more diverse shapes. In addition, is CodeLlama trained on all categories together? 
How does CodeLlama handle scale differences of different objects / categories? Or are all shapes normalized so that the input to the LLM is already normalized?\\n\\n3. Which component is the bottleneck of the pipeline? Is it the part segmentation or the joint prediction of CodeLlama? Ablation studies on this are essential to better evaluate the approach.\\n\\n4. To better support the claim of being able to extrapolate beyond objects\\u2019 structural complexity in the training set, I think it would be important to provide more results. For example, does the trained model generalize to other categories?\", \"questions\": \"The results presented in the paper are interesting, but I believe that additional evaluations would strengthen the significance and impact of the work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the detailed reviews and feedback. We hope our additional clarifications, experiments, and the updated submission pdf will address your raised concerns and increase your confidence in accepting our submission, and please let us know if you have any further questions.\\n\\n**Assumption on object joint structure**\", \"our_pipeline_can_be_divided_into_two_parts\": \"1) geometry reconstruction (includes 1.1 part segmentation and 1.2 shape completion); 2) articulation prediction. We\\u2019d like to clarify that our method for 1) can handle arbitrary geometry shapes (e.g. scissors, faucet handles), if these objects are put into the training set for fine-tuning the part segmentation model and shape completion model. However, for 2), because we use the OBB formulation and select an OBB edge as the rotation center, our method can handle sliding joints (e.g. a sliding oven rack) but will be inaccurate for hinge joints where the joint is not overlapping with any OBB edge (e.g. 
scissors). To also handle these objects, one possible extension is to add a regression MLP that takes in the OBB information and the LLM\\u2019s selection of rotation direction, then predicts a more precise joint axis. We have updated the submission pdf to better clarify our assumption and the subsequent limitations. Since cuboid-shaped objects (cabinets, doors) are commonly seen, we still believe it\\u2019s of much value to reconstruct those objects, especially those with many parts, which our method handles much better.\\n\\n\\n\\n\\n**Evaluation on more object categories**\\n\\nFollowing the discussion above, we provide additional experiments to further validate (1), i.e., to test whether Real2Code can handle objects with complex geometries. We collected a new dataset that included two new object categories from PartNet-Mobility - Eyeglasses and Scissors. These two categories have complex shapes beyond the simpler, cuboid-like shapes (e.g. cabinets) that we evaluated on in the main paper. To prepare the dataset, the original PartNet object assets require mesh repair processing, manual cleanup, deduplication (several instances are repeated and got removed to keep the train-test split clean), and rendering of RGBD images and occupancy grids. This results in 52 and 43 training objects for Eyeglasses and Scissors, respectively, with a total of 20200 segmentation masks and 14520 part ground-truth meshes. \\n\\nWe use the new data to re-train both our proposed SAM fine-tuning model and shape-completion model as described in the main paper, and test the new models on 4 unseen, held-out object instances from each category. Please see more details and qualitative results here: https://sites.google.com/view/real2code-submission/complex-shapes. These objects are challenging due to having parts with thinner shapes (e.g. blades) and containing more intricate details (e.g. handlebars), and taking up smaller areas in the rendered RGBD images. 
Qualitatively, we observe the SAM-based segmentation model is able to propose kinematically correct segmentations, but fails on some instances where the glasses legs or scissor blades are too thin.\\n\\nWe have not added more categories for now due to time constraints in data processing and model training (for the Lamp and Globe objects mentioned by the reviewer, the mesh processing for the assets is quite demanding), but hope these results provide sufficient evidence that our method can handle more diverse and complex object geometry.\"}", "{\"comment\": \"Thank you for revisiting the score. We appreciate your time dedicated to reviewing our submission and your feedback that helps us improve this work!\"}", "{\"comment\": \"Dear Reviewer 9fCy\\n\\nThank you for the detailed reviews and feedback. We hope our additional clarifications and the updated submission pdf will address your raised concerns and increase your confidence in accepting our submission. \\n\\n**Weaknesses section**\\n1. Q: Reason for using permutations during evaluation.\", \"a\": \"The model first predicts a parent body (this is the \\u2018root_geom\\u2019 prediction in Figure 4), and every other new joint prediction is connected to the root geom. Note that this is more challenging for objects like multi-drawer/door cabinets, because the LLM needs to infer which of the input OBBs is the root geom based on its sizes and relative orientation, but matters less for two-part objects like laptops.\"}", "{\"title\": \"Summary of Response\", \"comment\": \"Thank you to all reviewers for taking the time to review. The reviewers raised several constructive points and questions that led to insightful discussions. We therefore had hoped to respond with comprehensive answers supported with sufficient experiment results. This took up some time and led to reviewer mGLU lowering the score from 8 to 6. 
We would appreciate your understanding that the data preparation and model training/evaluating process is time-consuming, and sincerely hope the score can be revisited after viewing our updated responses. We have provided clarifications, revisions in the updated submission pdf, and additional experiments. We summarize the main items below.\\n\\n**1. Clarified assumption on object property.**\\n\\nOur pipeline does both 1) geometry reconstruction and 2) articulation prediction. Our method for 1) can handle arbitrary geometry shapes. But for 2), our method can handle sliding joints (e.g. a sliding oven rack) but will be inaccurate for hinge joints where the joint is not overlapping with any OBB edge (e.g. scissors). To handle these objects, one possible extension is to add a regression MLP that takes in the OBB information and the LLM\\u2019s output for rotation direction as input, then predicts a more precise joint axis. \\n\\nWe have updated the submission pdf to better clarify our assumption and the subsequent limitations. Since cuboid-shaped objects (cabinets, doors) are commonly seen, we still believe it\\u2019s of much value to reconstruct those objects, especially those with many parts, which our method handles much better than prior work.\\n\\n\\n**2. Provided additional experiments on objects with more complex shapes.**\\n\\nWe have prepared a new dataset using two new object categories from PartNet-Mobility - Eyeglasses and Scissors, both of which have complex shapes beyond the simpler, cuboid-like shapes (e.g. cabinets) that we evaluated on in the main paper. We use the new data to re-train both our proposed SAM fine-tuning model and shape-completion model, and test on held-out objects. These objects pose additional challenges such as thinner shapes (e.g. blades), but we demonstrate that Real2Code can handle them with good output mesh quality. 
We provide visualizations for all the held-out test objects in this link: https://sites.google.com/view/real2code-submission/complex-shapes \\n\\n**3. Additional experiments on novel object categories** \\n\\nTo provide more insight into the zero-shot generalization ability of Real2Code, we have prepared 8 more test objects from the Microwave and Door categories of PartNet (never seen in our training set). Many Microwave or Door objects have similar structural complexity to the StorageFurniture objects we trained on, but contain novel visual appearances and geometries. Therefore, our LLM module can generalize to these objects, but our fine-tuned SAM model and our shape completion model do not handle the OOD instances well.\", \"we_provide_result_visualizations_for_all_the_objects_in_this_link\": \"https://sites.google.com/view/real2code-submission/generalization-to-novel-category\\n\\n**4. Error propagation and main bottleneck of our system**\\n\\nOverall, the main bottleneck of our system is the part-level segmentation quality, because as long as a valid part gets segmented out into a reasonable point-cloud, 1) the shape completion model is trained with noisy input such that it can complete the input; 2) it will result in a reasonable OBB as input to the LLM, which is essentially copying over one of the input edges as a joint axis. This is further evidenced by our additional experiment results above. \\n\\nIn the updated submission pdf, we have added a Discussion & Limitation section to ensure this limitation is well disclosed.\\nTo further improve generalization, an interesting future direction is using SAMv2 to obtain more robust segmentations, or using large-scale pre-trained 3D generative models to complete the object shape. We provide a demo illustration of incorporating SAMv2 at this link: https://sites.google.com/view/real2code-submission/dataset-details \\n\\n**5. 
Comparison to a more recent baseline**\\n\\nWe were asked to compare against DigitalTwinArt [1], a concurrent work that became available after our submission. We selected five multi-part objects with 2 or more moving parts for evaluation, evaluate each object on 3 different seeds (i.e. a separate optimization run per seed), and report the average performance across the 3 runs. We provide quantitative results and visualization of test objects in this link: https://sites.google.com/view/real2code-submission/comparison-with-digitaltwinart \\nAlbeit not a full comparison, the results provide clear evidence that although DigitalTwinArt reported better performance than PARIS, it still struggles with objects with more than one moving part and overall under-performs Real2Code. \\n\\n[1] Yijia Weng, Bowen Wen, et al. Neural implicit representation for building digital twins of unknown articulated objects.\"}", "{\"comment\": \"Thank you for the response and the additional qualitative results. They have addressed my concerns, and I am raising my score to 6.\"}", "{\"comment\": \"Dear reviewer#mGLU,\\n\\nApologies for the late response, we were still in the process of gathering experimental results to better respond to your comments (e.g. running additional experiments on novel category objects, evaluating the cited method [1], which were time-consuming). We are updating with a better-formatted full response here. We hope you could consider changing back the score.\\n\\n_Weaknesses section_\\n\\n**W1 Performance on new or unseen categories**\\n\\nWe provide additional experiment results, which are divided into two sets:\\n1. Evaluation on smaller objects with complex shapes. We prepared data for two new object categories from PartNet-Mobility - Eyeglasses and Scissors. Both categories have complex shapes beyond simpler, cuboid-like shapes. 
We use the new dataset to re-train both our proposed SAM fine-tuning model and shape-completion model as described in the main paper, and test the new models on 8 unseen, held-out instances.\", \"please_see_this_link\": \"https://sites.google.com/view/real2code-submission/complex-shapes for more details and result visualizations. These objects pose additional challenges due to having parts with thinner shapes (e.g. blades) and containing more intricate details (e.g. handlebars), and taking up smaller areas in the rendered RGBD images. Qualitatively, we observe the SAM-based segmentation model is able to propose kinematically correct segmentations, but fails at some instances where the glasses legs or scissor blades are too thin.\\n\\n2. Generalization to novel object categories. \\n\\nWe additionally added 8 more test objects from the Microwave and Door categories of PartNet. These two categories were never seen in our training set. Many Microwave or Door objects have similar structural complexity to the StorageFurniture objects we trained on (e.g. a microwave also has a box-like parent body and a hinge door), but contain novel visual appearances and new geometries unseen in the training set. Therefore, our LLM module can generalize to these objects, but our fine-tuned SAM model and our shape completion model do not handle the OOD geometries well (e.g. the press buttons in a microwave dial panel). We have provided the additional details and results on these generalization experiments at this link: https://sites.google.com/view/real2code-submission/generalization-to-novel-category \\n\\n\\n**W2 regarding error propagation**\\n\\nThis is indeed a limitation of our method. 
Overall, the main bottleneck of our system is the part-level segmentation quality, because as long as a valid part gets segmented out into a reasonable point-cloud, 1) the shape completion model is trained with noisy input such that it can complete the input; 2) it will result in a reasonable OBB as input to the LLM, which is essentially copying over one of the input edges as a joint axis. In the updated submission pdf, we have added a Discussion & Limitation section to ensure this limitation is well disclosed. \\n\\nTo further improve generalization, an interesting future direction would be involving SAMv2 to obtain more robust segmentations, or using large-scale pre-trained 3D generative models to complete the object shape. See: https://sites.google.com/view/real2code-submission/dataset-details for an illustrated demo of incorporating SAMv2\\n\\n\\n_Questions section_\\n\\n**Q1 regarding comparison with [1]**:\\n\\nDue to limited space, please see the other comment below for more details.\\n\\n**Q2 on cross-category generalization**:\", \"following_the_discussion_above_and_results_here\": \"https://sites.google.com/view/real2code-submission/generalization-to-novel-category We observe that, again, 2D part segmentation is the main bottleneck of our pipeline (our model's mesh completion from GT segmentation has clearly better quality), and completely fails for more extreme OOD instances where the object looks very different from training set (e.g. the glass door ID 9107).\\n\\n**Q3 on training & inference time**:\\n\\nTraining time for SAM fine-tuning, shape completion model training, and LLM fine-tuning takes approximately 24hrs, 12hrs, and 10hrs respectively. 
At inference time: 1) SAM prompting takes approximately 3min per object because of our view-consistent prompting scheme, but the inference time is still kept reasonable because we cache the image embedding from SAM and only call the light-weight prompt decoder for later prompt points; 2) shape-completion model forwarding is single-pass, which can be batched and takes <1min for all parts; 3) LLM generation takes ~2min per object; because the output is strictly formatted to be concise, it requires only ~200 output tokens. \\n\\n**Q4 on extracting oriented bounding box**: we first use the multi-view RGBD images or the Dust3r-output point maps to get segmented part-level point clouds, then extract OBBs using open3d (`open3d.geometry.OrientedBoundingBox`). \\n\\n\\nThank you for the detailed reviews and feedback, and please let us know if you have further questions.\", \"title\": \"updated full response\"}", "{\"comment\": [\"Thank you for the detailed reviews and feedback. We hope our additional clarification, experiments, and the updated submission pdf will address your raised concerns and increase your confidence in accepting our submission, and please let us know if you have any further questions.\", \"**Question 1 & 3: Fragile to failures in specific components in the pipeline, and system bottleneck**\", \"Error propagation is a valid limitation of our method. 
Overall, the main bottleneck of our system is the part-level segmentation quality, because as long as a valid part gets segmented out into a reasonable point-cloud, 1) the shape completion model is trained with noisy input such that it can complete the input; 2) it will result in a reasonable OBB as input to the LLM which is essentially copying over one of the input edges as a joint axis.\", \"This can be better illustrated by the qualitative results here: https://sites.google.com/view/real2code-submission/complex-shapes We observe that when given ground-truth part-level segmentations, the shape completion model produces high quality mesh reconstructions, and the resulting OBBs are naturally well fitted and oriented. However, when there are sufficient errors in some of the segmented views, we see the segmented point clouds having \\u201cleakage\\u201d between object parts. See Eyeglasses object#101303: The resulting mesh is inflated because the input point cloud on the glasses part contains inaccurate points on the legs, hence the model receives a bad normalized input pcd (recall that we normalize all the part point clouds using partially-observed OBB during inference).\", \"In the updated submission pdf, we have added a Discussion & Limitation section to ensure this limitation is well disclosed (see Appendix section A.1.3). And we will update the submission with these additional results to provide more insights. However, there are also several possibilities to increase the robustness of our part segmentation module: 1) provide additional user prompt points; this will allow the model to focus more on the correct parts; 2) incorporate the latest SAMv2 model that segments videos. We can concatenate multi-view inputs into a video, first run our fine-tuned model to propose kinematically-accurate parts, then run SAMv2 to obtain view-consistent masks. 
We provide a demo of this process in the video here: https://sites.google.com/view/real2code-submission/dataset-details\", \"**Q2: Results on more object categories.**\", \"We have added additional experiments to evaluate our method on new object categories, please see qualitative results here: https://sites.google.com/view/real2code-submission/complex-shapes\", \"We prepared a new dataset using two new object categories from PartNet-Mobility - Eyeglasses and Scissors, both of which have complex shapes beyond the simpler, cuboid-like shapes (e.g. cabinets) that we evaluated on in the main paper. We use the new data to re-train both our proposed SAM fine-tuning model and shape-completion model as described in the main paper, and test the new models on unseen, held-out object instances. Qualitatively, we observe the SAM-based segmentation model is able to propose kinematically correct segmentations, but fails at some instances where the glasses legs or scissor blades are too thin.\", \"Regarding questions on LLM training details: 1) yes, the LLM is trained on all categories together. 2) the objects across categories are normalized to the same scale in size, and subsequently CodeLlama inputs are also normalized. See https://sites.google.com/view/real2code-submission/dataset-details for a mesh visualization of a Laptop object and a Box object, which we show are already on a matching scale.\", \"**Q4: Generalization to other categories:**\", \"Take the example of Microwave objects from PartNet. Microwave is an unseen category not included in our training set; objects here have similar structural complexity to the StorageFurniture objects we trained on (a box-like parent body and a hinge door), but contain novel visual appearances and new geometries unseen in the training set. Therefore, our LLM module can generalize to these objects, our fine-tuned SAM model sometimes fails on object instances that have OOD visuals, and our shape completion model does not handle the geometries well (e.g. 
the press buttons in the dial panel).\", \"Overall, because of our OBB formulation, the LLM can generalize reasonably within similar structures to our training set. In terms of geometry reconstruction, the generalization is bounded by both the visual appearances and diversity in shape completion model training data. To further improve generalization, an interesting future direction would be involving SAMv2 to obtain more robust segmentations, or using large-scale pre-trained 3D generative models to complete the object shape. We have provided the additional details and results on these generalization experiments at link: https://sites.google.com/view/real2code-submission/generalization-to-novel-category\"]}", "{\"metareview\": \"This paper presents Real2Code, a novel approach for reconstructing articulated objects from visual observations by generating code, using fine-tuned LLMs specialized for this task. The writing is clear, the method is innovative, and the results are convincing and demonstrate the effectiveness of the proposed work. One limitation of the method is the dependence on SAM's segmentation capabilities. Particularly, Reviewer mGLU raises the concern about \\u201cthe fine-tuned SAM's weak generalization performance, which struggles even with synthetic objects. The assumption of perfect segmentation in practical applications is unrealistic and substantially limits the system's scalability.\\u201d Nevertheless, this is a good paper that should be presented at the conference.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal has addressed most of the reviewer concerns. Reviewer mGLU has a remaining concern that was not addressed by the reviewers (see meta-review), which is not critical for the acceptance of the paper, but which should be discussed by the authors in the limitations section.\"}", "{\"comment\": \"Dear Reviewer S2Gc,\\n\\nThank you for the detailed reviews and feedback. 
We hope the following additional clarifications, additional results, and the updated submission pdf will address your raised concerns. \\n\\n- Q1: Why is PARIS's reported performance lower than in the original paper?\\nPARIS jointly optimizes object parts and the motion model based on a single rendering objective. As a result, their optimization is unstable and has large performance variances across different trials. While PARIS reports their best results across multiple trials in their paper, for fairness we report their average performance across 5 trials with random initializations. \\n\\n- Q2: how to handle more complex-shaped objects.\", \"a\": \"Our inference-time compute requirement is indeed larger than end-to-end methods like CARTO. To be specific: during the SAM-prompting phase, we sample in 3D, then project each 3D point onto 2D points in each single RGB image, hence the number of SAM forward passes scales linearly with the number of input camera views; however, we can cache the image embedding from SAM for every RGB image, and only call the light-weight prompt decoder for additional prompt points. Additionally, the inference compute for LLM code generation is dependent on the number of object parts, and roughly scales linearly with the generation token length. The shape-completion model forwarding is single-pass, which can be batched and takes <1min for all parts.\\nOverall, our system is slower at inference time, but we deem it an affordable price to pay since this formulation can handle arbitrary numbers of object parts, and takes advantage of the strong generalization ability of pre-trained SAM models. A CARTO-like model is single-shot on objects with one joint, hence it would require either modifying the architecture's output head, or running multi-round interaction to handle more object parts. 
We have updated the A.1 Discussion & Limitation section to better discuss these timing constraints of our method (see A.1.2), and also added the missing citation for CenterSnap. \\n \\n\\n- Q4: Details on pre-training datasets & code availability.\\nPlease see Appendix 4.2 + 4.3 for more details on dataset preparation and model training details. We use a subset of PartNet-Mobility object assets and did our own RGB rendering and code conversion from raw URDF to OBB-relative MJCF files. Please let us know if you have further detailed questions. We will open-source code upon paper acceptance.\"}", "{\"summary\": \"This paper presents Real2Code, a novel approach for reconstructing articulated objects from visual observations.\", \"main_contributions\": [\"1. A new method (Real2Code) that reconstructs articulated objects by generating code, using fine-tuned LLMs specialized for this task.\", \"2. A part reconstruction pipeline that combines:\", \"Kinematic-aware view-consistent part segmentation model.\", \"3D shape completion model.\", \"Fine-tuned LLMs to predict joint articulation.\", \"3. Significant performance improvements over previous methods:\", \"First approach to accurately handle objects with more than three parts\", \"Generalizes beyond training data (trained on up to 7 parts, works on up to 10 parts)\", \"Works with just a few RGB images, without requiring depth or camera information\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The writing is clear and easy to follow.\\n2. The proposed pipeline innovatively formulates the articulation reconstruction as code generation, which naturally combine current powerful foundation models (SAM, LLM) for articulated object reconstruction.\\n3. Real2Code demonstrates significant performance improvements over previous methods. It accurately handle objects with more than three parts and only requires RGB images without requiring depth or camera information.\\n4. 
This paper provides details for the training of the whole pipeline, including data preparation and training of key components (SAM, completion model, and CodeLlama), which demonstrates good reproducibility and technical soundness.\", \"weaknesses\": \"1. Real2Code demonstrates good performance on trained categories (Laptop, Box, Refrigerator, Storage-Furniture, and Table). However, the performance of unseen categories is not explored. I don't expect the model to generalize well to all other categories, but I do expect some experiments to show whether there is still a problem with category generalization.\\n2. There is no discussion of when the model will fail, especially if some of the components of the model fail.\\u00a0\\u00a0For example, fine-tuned SAM might not segment parts accurately, or the LLM might output an incorrect result under certain circumstances.\\n\\n\\nThese weaknesses don't necessarily diminish the paper's contribution but addressing them would strengthen the work and increase its impact. Many could be addressed through additional experiments and analysis rather than fundamental changes to the method.\", \"questions\": \"1. Why is the method from [1] not included in the baseline comparisons, given that it demonstrates better performance than PARIS?\\n2. How is the cross-category generalization ability of Real2Code, especially on real-world data?\\n3. How long do the training and inference of the entire pipeline take respectively\\uff1f\\n4. How to extract oriented bounding box of each part?\\n\\n[1] Yijia Weng, Bowen Wen, etc. 
Neural implicit representation for building digital twins of unknown articulated objects.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Additional results from evaluating DigitalTwinArt [1]\", \"comment\": \"DigitalTwinArt [1] is a concurrent work whose evaluation code we did not have access to at the time of submission. Due to limited time, we selected five multi-part objects with 2 or more moving parts for evaluation. To provide a fair evaluation as we did for the PARIS baseline, we evaluate each object on 3 different seeds (i.e. a separate optimization run per seed), and report the average performance across the 3 runs. Please see the updated submission website for the result table containing averaged shape reconstruction and joint prediction results, as well as visualizations of the evaluated objects. https://sites.google.com/view/real2code-submission\\n\\n\\nAlbeit not a full comparison with all the test objects reported in the main paper, these results should provide sufficient evidence that although DigitalTwinArt reported better performance than our compared baseline method PARIS and is significantly more stable to optimize (i.e. results from different runs show a lower variance), it still struggles with objects with more than one moving part and overall under-performs Real2Code. \\n\\n[1] Yijia Weng, Bowen Wen, et al. Neural implicit representation for building digital twins of unknown articulated objects.\"}", "{\"comment\": \"The authors have not made any response to my concerns and questions as well as those of other reviewers. Considering my concerns and those of other reviewers, I reduce the score to 6.\"}", "{\"summary\": \"This paper formulates joint prediction as a code-generation problem and adapts an LLM to this task, which makes it scale elegantly to process an articulated object with multiple joints. 
It also introduces a part reconstruction pipeline leveraging 2D part segmentation and part-level shape completion.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Formulating joint prediction as a code-generation problem provides an elegant way to handle varying numbers of object joints.\", \"Part-level shape completion makes sense since part structures are much simpler than structures of whole objects. Table 1 demonstrates the effectiveness of the proposed shape completion model.\"], \"weaknesses\": [\"The selection of object categories for evaluation is limited.\", \"For part-level shape completion, it would be more compelling to include categories with a greater diversity of part shapes rather than focusing primarily on cuboid-like forms. For instance, objects such as globes and lamps in PartNet Mobility exhibit a variety of shapes, including spherical and cylindrical forms, which provide a more comprehensive basis for evaluation. Additionally, the assumption that 'many common articulated objects consist of cuboid-like parts' is not fully substantiated when considering the full range of object categories in PartNet-Mobility.\", \"In articulation prediction, the formulation assumes that 'the position of corresponding revolute joints will lie closely to, if not overlap with, one of the OBB edges'. However, this assumption does not seem solid enough either. Take \\u201cfolding chairs\\u201d in PartNet-Mobility for example: the revolute joints of many instances do not lie close enough to OBB edges (at a quadrisection or even trisection point). Do these assumptions restrict the range of categories suitable for evaluation?\"], \"questions\": \"Why were only these five object categories chosen from PartNet-Mobility for evaluation? The current formulation relies on assumptions that appear somewhat unsubstantiated. 
Is this why Real2Code is hard to evaluate in more diverse categories?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Real2Code, a method for reconstructing articulated objects from multi-view images through code generation. The method first reconstructs part geometry using image segmentation and shape completion. Then it predicts joint information as code generation from fine-tuned LLM which takes an object part as oriented bounding boxes. Experiments show that this method outperforms previous method in generating parts with over three parts and can generalize to real object reconstruction by training only on synthetic data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This method formulates joint prediction as a code generation problem, which is different from prior work. The biggest advantage of such a formulation is the ability to scale well with different numbers of parts (prior work works mostly for objects with <=3 parts).\\n\\n2. The overall pipeline is novel -- it leverages a few different modules including Vision models for part segmentation and completion as well as LLM for code generation. This way, the problem is decomposed into a few smaller steps which is shown solvable with previous methods.\\n\\n3. The text and figures are overall well-written and easy to follow.\\n\\n4. Experiments have been conducted to validate each proposed components. Results seem to achieve state of the art, especially on objects with many parts.\", \"weaknesses\": \"1. In Sec. 4.2.1, it mentions that \\\"we generate permutations of the set of predicted meshes and take the permutation that results in lowest error; the same logic is used for joint prediction results\\\". I was wondering why this is needed to evaluate this method. Is it because the proposed method is not very stable? 
How much more time would this cost for the inference of this method?\\n\\n2. The link to more visualizations included in Sec. 4.4 does not contain any result visualizations -- it seems it only has a method overview figure and an abstract.\\n\\n3. The content in Tab. 1 is a bit confusing:\\n(1) what is ``Real2Code+gtSeg``? The paper does not seem to mention / analyze this row anywhere in the text.\\n\\n(2) If I understand ``Real2Code+gtSeg`` the same way as ``Real2Code+gtBB`` in Tab. 2, it should be an upper bound of ``Real2Code (Ours)``; if so, why does ``Real2Code+gtSeg`` perform worse than ``Real2Code (Ours)`` in a few columns like Whole & Parts for Box, etc.\", \"questions\": \"1. Tab. 3 and its corresponding text have some typos: row 2 has \\\"Rot\\\" in out column, but it is referred to as \\\"Rel\\\" in the text (if I understand it correctly).\\n\\n2. In Tab. 3, the first row has 0 error for \\\"rot\\\" on 3, 4-5, 6-15 parts. Then why does the rot error suddenly become very big for 2 parts?\\n\\n3. How do you determine the parent vs. child node / the canonical pose, especially for real-world objects? For example, the two parts of a laptop have very similar geometries/OBBs. If a laptop is placed upside down, would this method then instead treat the keyboard part as the child and the screen part as the parent?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper reconstructs articulated objects from visual observations. The approach utilizes a modular pipeline which first reconstructs part-level geometry from segmentation and then uses a codegen LLM to combine the individual parts into an articulated assembled model to be executed in simulation. The paper compares to relevant recent baselines and demonstrates strong improvement. 
The approach also scales well to an increasing number of joints due to its modular approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"In my opinion, below are the strengths of the paper:\\n\\n1. The paper scales well to an increased number of joints. This has been a major limitation of preceding works and this work addresses it nicely with a modular approach, i.e., part-level reconstruction and code-gen integration for the subsequent steps. \\n\\n2. Strong quantitative improvement numbers compared to recent state-of-the-art baselines, especially for increasing numbers of parts. \\n\\n3. The presentation of the paper is nice and the writing is easy to follow.\", \"weaknesses\": \"I have some questions for the authors. In my opinion, below are the paper's weaknesses:\\n\\n1. Why does the PARIS baseline struggle a lot, even for the 2-part case? Did the authors try to tune their method? Based on the PARIS results from the paper, it looks like it should reasonably work well for a simpler 2-part setting?\\n\\n2. Despite good qualitative results, why are the results only shown on simpler objects like cupboards and laptops? Does the method work for varied articulated objects like scissors, staplers etc.? Is this an inherent limitation of their method, that it only works for a subset of articulated objects for which they have a prior? If yes, that should be clearly stated, as other baselines seem to work for more complicated articulated objects as well?\\n\\n3. What is the timing result of the method? Some of the baselines mentioned, i.e. CARTO, a follow-up from CenterSnap [1], are very fast and don't require manual SAM prompting, i.e. single-shot. This is not discussed very well in the related works. \\n\\n4. I didn't find rigorous details on pretraining datasets for shape completion as well as datasets used for finetuning code llama. Those should be helpful to include. Also, do the authors plan to open-source their code? 
It looks like that will be helpful as well for the community to build up on?\\n\\n[1] Irshad et al. CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation\", \"questions\": \"Please see the weakness section above for clarification questions. I look forward to seeing them in the rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
CAgIwCbnQI
Learning with Preserving for Continual Multitask Learning
[ "Siwoo Bae", "Hanchen David Wang", "Zirong Chen", "Meiyi Ma" ]
Artificial Intelligence (AI) drives advancements across fields, enabling capabilities previously unattainable. Modern intelligent systems integrate increasingly specialized tasks, such as improving tumor classification with tissue recognition or advancing driving assistance with lane detection. Typically, new tasks are addressed by training single-task models or re-training multitask models, which becomes impractical when prior data is unavailable or new data is limited. This paper introduces Continual Multitask Learning (CMTL), a novel problem category critical for future intelligent systems yet overlooked in current research. CMTL presents unique challenges beyond the scope of traditional Continual Learning (CL) and Multitask Learning (MTL). To address these challenges, we propose Learning with Preserving (LwP), a novel approach for CMTL that retains previously learned knowledge while supporting diverse tasks. LwP employs a Dynamically Weighted Distance Preservation loss function to maintain representation integrity, enabling learning across tasks without a replay buffer. We extensively evaluate LwP on three benchmark datasets across two modalities—inertial measurement units of multivariate time series data for quality of exercises assessment and image datasets. Results demonstrate that LwP outperforms existing continual learning baselines, effectively mitigates catastrophic forgetting, and highlights its robustness and generalizability in CMTL scenarios.
[ "continual learning", "continual multitask learning", "representation learning", "knowledge distillation" ]
Reject
https://openreview.net/pdf?id=CAgIwCbnQI
https://openreview.net/forum?id=CAgIwCbnQI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yizaXrT6Gv", "vBSivrhw4L", "ti2ZWhdji5", "nWVCI1mduC", "nNFPNYDsmw", "lTyymJwgN7", "iUfjBDFYlL", "h2IXfavfzD", "gKRH5TgEV4", "gHjtWSLP6S", "g4QZf2tvLf", "e8wEi8alAT", "ZBABHyuese", "UoX8DPbFxf", "S5K6gBGvXY", "PhghQ2b0MM", "OKxt8Sx5Fp", "KqQe72V4S5", "IBsC3RvXlb", "Dz7tJp89om", "DnuAKL65Rq", "DaUJBokJI5" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1737524142319, 1732127469482, 1732127626025, 1732550584708, 1731771964667, 1732550356411, 1730408084225, 1732550415569, 1733169332004, 1731771830318, 1732695428826, 1733173466839, 1732550550315, 1731772091942, 1732550662782, 1734689800810, 1731771884881, 1730854754948, 1732550626616, 1730611108338, 1730582371821, 1731771935092 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Reviewer_KdBT" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Reviewer_KdBT" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11726/Area_Chair_K7gh" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Reviewer_qi2x" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ], [ "ICLR.cc/2025/Conference/Submission11726/Reviewer_minZ" ], [ "ICLR.cc/2025/Conference/Submission11726/Reviewer_9JHi" ], [ "ICLR.cc/2025/Conference/Submission11726/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Regarding Weakness 3 on baselines\", \"comment\": \"Dear Reviewer qi2x,\\n\\nThank you for your comments regarding the baselines. In response, we conducted an additional review of state-of-the-art methods to ensure a comprehensive comparative analysis. Below, we provide further clarification on the scope of our work and the rationale behind our choice of baselines.\\n\\nOur focus lies on lightweight, non-pretrained models, as our work emphasizes theoretical contributions and practical applications across different modalities without reliance on pretrained foundational models or large-scale architectures. Such models are typically unavailable for continual multitask learning in diverse modalities (e.g., IMU sensing, as applied in this paper). \\n\\nTo address your feedback, we categorized existing works into two groups: \\n\\n1. **Theoretical advancements for performance enhancement without external data (online method).** \\n In this category, we incorporated *Dual View Consistency (DVC)* [Gu et al., 2022] and *Online Bias Correction (OBC)* [Chrysakis and Moens, 2023] into our comparative analysis. DVC and OBC represent recent and relevant approaches in online class-incremental continual learning, strengthening the validation of our method. \\n\\n We have updated the manuscript to include the results of DVC and OBC in the comparative tables and figures. 
As shown below, our proposed method, *LwP*, consistently outperforms DVC and OBC: \\n\\n | **Method** | **CelebA (10 tasks)** | **PhysiQ (3 tasks)** | **Fairface (3 tasks)** | \\n |------------------|-----------------------|------------------------|--------------------------| \\n | **DVC** | 71.441 \\u00b1 7.640 | 85.100 \\u00b1 10.381 | 63.848 \\u00b1 3.193 | \\n | **OBC** | 70.829 \\u00b1 8.267 | 83.999 \\u00b1 11.377 | 63.872 \\u00b1 3.449 | \\n | **LwP (Ours)** | **73.484 \\u00b1 8.019** | **88.242 \\u00b1 12.010** | **66.482 \\u00b1 3.138** | \\n\\n This addition demonstrates the competitiveness of our approach, even against a strong and recently proposed state-of-the-art baseline. We thank the reviewer for highlighting this point and have revised the manuscript accordingly.\\n\\n2. **Works on continual learning that fall outside the scope of our approach (offline).** \\n While several continual learning approaches exist in the literature, many are not directly comparable to our method due to differences in assumptions or objectives. For instance, pretrained models are currently beyond the scope of our work, as our emphasis is on developing approaches that operate without large-scale external data or pretrained foundations. However, we recognize the potential compatibility of pretrained models with our framework and will explore this integration in future extensions. \\n Although several other works on continual learning [Zhang et al., 2023] and [Wang et al., 2024] have appeared in the past several years, they are not directly comparable to our approach because they operate on large-scale external data or pretrained foundations. 
While these models are currently outside the scope of our work, we acknowledge their potential compatibility with our framework and may explore how to integrate this in future extensions.\\n\\nWe believe the baselines included in our paper are sufficient, as they encompass a diverse set of algorithmic methods, including approaches focused on replay buffers [Buzzega et al., 2020], latent alignment (e.g., GSS) [Aljundi et al., 2019], and feature disentanglement regularization (FDR) [Benjamin et al., 2018], and are well-aligned with the continual multitask learning domain.\\n\\n**Reference**\\n\\nGu, Yanan, et al. \\\"Not just selection, but exploration: Online class-incremental continual learning via dual view consistency.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.* 2022.\\n\\nChrysakis, Aristotelis, and Marie-Francine Moens. \\\"Online bias correction for task-free continual learning.\\\" ICLR 2023 at OpenReview (2023).\\n\\nZhang, Gengwei, et al. \\\"Slca: Slow learner with classifier alignment for continual learning on a pre-trained model.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\nWang, Liyuan, et al. \\\"Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality.\\\" Advances in Neural Information Processing Systems 36 (2024). \\n\\nAljundi, Rahaf, et al. \\\"Gradient based sample selection for online continual learning.\\\" Advances in neural information processing systems 32 (2019).\\n\\nBenjamin, Ari S., David Rolnick, and Konrad Kording. \\\"Measuring and regularizing networks in function space.\\\" arXiv preprint arXiv:1805.08289 (2018).\\n\\nBuzzega, Pietro, et al. 
\\\"Dark experience for general continual learning: a strong, simple baseline.\\\" Advances in neural information processing systems 33 (2020): 15920-15930.\"}", "{\"title\": \"Regarding Weakness 3 on baselines\", \"comment\": \"Dear Reviewer minZ,\\n\\nThank you for your feedback and concern regarding the baselines. In response, we revisited state-of-the-art methods to ensure a comprehensive comparative analysis. Below, we clarify the scope of our work and the rationale for selecting the baselines.\\n\\nOur study focuses on lightweight, non-pretrained models to emphasize theoretical contributions and practical applications across diverse modalities. Pretrained models are typically unavailable or unsuitable for continual multitask learning in diverse modalities like IMU sensing, which is one focus of this work.\\n\\nTo address your concerns, we categorized existing works into two groups:\\n\\n1. **Theoretical advancements for performance enhancement without external data (online method).** \\n In this category, we incorporated *Dual View Consistency (DVC)* [Gu et al., 2022] and *Online Bias Correction (OBC)* [Chrysakis and Moens, 2023] into our comparative analysis. DVC and OBC represent recent and relevant approaches in online class-incremental continual learning, strengthening the validation of our method. \\n\\n We have updated the manuscript to include the results of DVC and OBC in the comparative tables and figures. 
As shown below, our proposed method, *LwP*, consistently outperforms DVC and OBC: \\n\\n | **Method** | **CelebA (10 tasks)** | **PhysiQ (3 tasks)** | **Fairface (3 tasks)** | \\n |------------------|-----------------------|------------------------|--------------------------| \\n | **DVC** | 71.441 \\u00b1 7.640 | 85.100 \\u00b1 10.381 | 63.848 \\u00b1 3.193 | \\n | **OBC** | 70.829 \\u00b1 8.267 | 83.999 \\u00b1 11.377 | 63.872 \\u00b1 3.449 | \\n | **LwP (Ours)** | **73.484 \\u00b1 8.019** | **88.242 \\u00b1 12.010** | **66.482 \\u00b1 3.138** | \\n\\n This addition demonstrates the competitiveness of our approach, even against a strong and recently proposed state-of-the-art baseline. We thank the reviewer for highlighting this point and have revised the manuscript accordingly.\\n\\n2. **Works on continual learning that fall outside the scope of our approach (offline).** \\n While several continual learning approaches exist in the literature, many are not directly comparable to our method due to differences in assumptions or objectives. For instance, pretrained models are currently beyond the scope of our work, as our emphasis is on developing approaches that operate without large-scale external data or pretrained foundations. However, we recognize the potential compatibility of pretrained models with our framework and will explore this integration in future extensions. \\n Although several other works on continual learning [Zhang et al., 2023] and [Wang et al., 2024] have appeared in the past several years, they are not directly comparable to our approach because they operate on large-scale external data or pretrained foundations. 
While these models are currently outside the scope of our work, we acknowledge their potential compatibility with our framework and may explore how to integrate this in future extensions.\\n\\nWe believe the baselines included in our paper are sufficient, as they encompass a diverse set of algorithmic methods, including approaches focused on replay buffers [Buzzega et al., 2020], latent alignment (e.g., GSS) [Aljundi et al., 2019], and feature disentanglement regularization (FDR) [Benjamin et al., 2018], and are well-aligned with the continual multitask learning domain.\\n\\n**Reference**\\n\\nGu, Yanan, et al. \\\"Not just selection, but exploration: Online class-incremental continual learning via dual view consistency.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.* 2022.\\n\\nChrysakis, Aristotelis, and Marie-Francine Moens. \\\"Online bias correction for task-free continual learning.\\\" ICLR 2023 at OpenReview (2023).\\n\\nZhang, Gengwei, et al. \\\"Slca: Slow learner with classifier alignment for continual learning on a pre-trained model.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\nWang, Liyuan, et al. \\\"Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality.\\\" Advances in Neural Information Processing Systems 36 (2024). \\n\\nAljundi, Rahaf, et al. \\\"Gradient based sample selection for online continual learning.\\\" Advances in neural information processing systems 32 (2019).\\n\\nBenjamin, Ari S., David Rolnick, and Konrad Kording. \\\"Measuring and regularizing networks in function space.\\\" arXiv preprint arXiv:1805.08289 (2018).\\n\\nBuzzega, Pietro, et al. 
\\\"Dark experience for general continual learning: a strong, simple baseline.\\\" Advances in neural information processing systems 33 (2020): 15920-15930.\"}", "{\"title\": \"To Verify if Our Responses Have Addressed Your Concerns and Express Our Gratitude\", \"comment\": \"Dear Reviewer,\\n\\nWe deeply value the time and effort you have dedicated to reviewing our paper and providing insightful suggestions. As the discussion phase is coming to an end and no further author-reviewer interactions are planned, we would like to confirm if our responses from this and a few days ago have successfully addressed your concerns. We hope we have resolved the issues raised. However, if there are any points that require further clarification or additional concerns you would like us to address, please feel free to reach out. We remain fully committed to continuing our discussion with you.\\n\\nBest regards.\"}", "{\"comment\": \"Continue to our previous comment\\n\\n**Response to Weakness 4:**\\n\\nWe appreciate your suggestion to explore the performance of the model when continuously learning additional tasks beyond the initial base tasks. This is an important scenario to evaluate the scalability and robustness of our approach. We will conduct further experiments in this direction and include the results in the revised paper to provide a more comprehensive validation of our method.\\n\\n**Response to Weakness 5:**\\n\\nThank you for your valuable feedback. We agree that integrating the theoretical insights on the extension to learning problems more seamlessly into the main body would improve the paper's readability and coherence. In the revised manuscript, we will restructure the content to incorporate this section into the main text, ensuring a smoother narrative flow and better alignment with the rest of the paper.\\n\\n**References:**\\n\\nBuzzega, Pietro, et al. 
\\\"Dark experience for general continual learning: a strong, simple baseline.\\\" Advances in Neural Information Processing Systems 33 (2020): 15920-15930.\\n\\nFostiropoulos, Iordanis, Jiaye Zhu, and Laurent Itti. \\\"Batch model consolidation: A multi-task model consolidation framework.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\nKim, Sanghwan, et al. \\\"Achieving a better stability-plasticity trade-off via auxiliary networks in continual learning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\nHyuntak Cha, Jaeho Lee, and Jinwoo Shin. Co2L: Contrastive Continual Learning, June 2021. URL http://arxiv.org/abs/2106.14413. arXiv:2106.14413 [cs].\\n\\nWonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho. Relational Knowledge Distillation, May 2019. URL http://arxiv.org/abs/1904.05068. arXiv:1904.05068 [cs].\\n\\nShunjie Han, Cao Qubo, and Han Meng. Parameter selection in svm with rbf kernel function. In World Automation Congress 2012, pp. 1\\u20134. IEEE, 2012.\"}", "{\"title\": \"Regarding Concerns of Model Size and Image Resolution and Ablation Study\", \"comment\": \"### Response to the concern regarding model size and image resolution.\\n\\nWe have expanded our experiments to assess the applicability of LwP with larger models and higher-resolution versions of the CelebA dataset. We conducted experiments using ResNet-50 and ResNet-101 models on 64x64 and 224x224 resolution datasets. ViT was omitted because the architecture demands large sample sizes for effective training and is considerably more sensitive to optimizer hyperparameters than ResNets. 
The results are included in the appendix to provide a more comprehensive validation of our approach.\\n\\n| **Model** | **ResNet50 (32\\u00d732)** | **ResNet101 (32\\u00d732)** | **ResNet50 (224\\u00d7224)** |\\n|--------------|----------------------|-----------------------|-------------------------|\\n| LwF | 59.277 \\u00b1 11.920 | 58.279 \\u00b1 11.202 | 60.012 \\u00b1 14.448 |\\n| oEWC | 66.975 \\u00b1 10.110 | 67.159 \\u00b1 10.506 | 68.511 \\u00b1 13.352 |\\n| ER | 65.335 \\u00b1 9.298 | 65.646 \\u00b1 8.784 | 65.973 \\u00b1 14.729 |\\n| SI | 66.698 \\u00b1 10.030 | 67.456 \\u00b1 9.880 | 67.747 \\u00b1 13.754 |\\n| GSS | 65.926 \\u00b1 13.120 | 65.587 \\u00b1 13.142 | 69.817 \\u00b1 18.771 |\\n| FDR | 61.753 \\u00b1 11.943 | 61.720 \\u00b1 12.017 | 65.225 \\u00b1 15.545 |\\n| DER | 62.105 \\u00b1 12.114 | 63.797 \\u00b1 10.774 | 69.859 \\u00b1 12.690 |\\n| DERPP | 62.814 \\u00b1 11.071 | 62.957 \\u00b1 11.577 | 68.102 \\u00b1 13.557 |\\n| DVC | 67.084 \\u00b1 10.380 | 65.340 \\u00b1 11.427 | 70.921 \\u00b1 13.823 |\\n| OBC | 64.220 \\u00b1 11.237 | 66.058 \\u00b1 10.370 | 69.319 \\u00b1 13.607 |\\n| **LwP (Ours)** | **67.388 \\u00b1 11.125** | **69.432 \\u00b1 10.416** | **85.064 \\u00b1 5.388** |\\n\\nThis demonstrates the competitiveness of our approach. We have included these results in the revised manuscript to provide a more comprehensive evaluation of LwP's performance in continual multitask learning scenarios in the Appendix D.6. We thank the reviewer again for highlighting this point.\\n\\n### Response to the Ablation Study\\nTo strengthen our work and provide additional clarity, we have moved the ablation study from the appendix to the main paper. 
For further clarification, the results are also presented here for reference.\\n\\n\\n| **Method on PhysiQ** | **LwP ($L^2$)** | **LwP (Cosine)** | **LwP (RBF)** | **IRD (Co2L)** | **RKD** |\\n|----------------------------|----------------------|-----------------------|-----------------------|----------------------|--------------------|\\n| **Dynamic Weighting** | **88.2 \\u00b1 12.0** | 85.4 \\u00b1 13.1 | 84.5 \\u00b1 13.7 | 86.4 \\u00b1 11.5 | 85.1 \\u00b1 13.3 |\\n| **W/o Dynamic Weighting** | 86.0 \\u00b1 12.3 | 84.1 \\u00b1 14.4 | 84.8 \\u00b1 14.5 | 79.9 \\u00b1 17.1 | 85.9 \\u00b1 11.9 |\\n\\nThe ablation study demonstrates that LwP using $L^2$ with dynamic weighting outperforms other variations and baselines. We have included these results in the revised manuscript to provide a more comprehensive evaluation of LwP's performance in continual multitask learning scenarios in Section 4.6. We thank the reviewer for highlighting this point.\"}", "{\"summary\": \"This paper introduces Learning with Preserving (LwP), a novel approach to continual multitask learning (CMTL) that addresses limitations in traditional continual and multitask learning methods by preserving previously learned knowledge across diverse tasks. LwP employs a Dynamically Weighted Distance Preservation (DWDP) loss function, which maintains representation integrity for both prior and future tasks without relying on a replay buffer.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The idea is good - multi-task continual learning is an essential problem in the space of continual learning.\\n2. The dynamic weighting is an interesting method, but it is a little unclear whether pair-wise comparison is optimal when the dataset is big.\\n3. Adequate sets of experiments across various metrics.\", \"weaknesses\": \"1. Why do the authors consider three separate datasets and not a combination of them? 
The latter would be more representative of real-world scenarios.\", \"eg\": \"first 3 tasks CelebA, next 3 tasks PhysiQ and so on, which is more representative of a realistic scenario.\\n2. How is Fig 2 visualized? What exactly is it meant to represent? Is this a conceptual diagram or a visualization of actual data?\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Regarding Experiment of MTL to CL\", \"comment\": \"### Response to the concern regarding the performance of a model when continuously learning five tasks in the presence of five base tasks.\\n\\nWe have conducted additional experiments to assess the applicability of LwP in continual multitask learning scenarios where the model continuously learns additional tasks after initially learning five tasks. The model was tested on five base tasks and five additional tasks in a multitask learning manner on the CelebA dataset. The results are presented in the table below:\\n\\n| **Model** | **ResNet18 (64\\u00d764)** |\\n|--------------|-----------------------|\\n| LwF | 74.057 \\u00b1 11.364 |\\n| oEWC | 82.250 \\u00b1 6.362 |\\n| ER | 77.245 \\u00b1 8.434 |\\n| SI | 82.194 \\u00b1 6.460 |\\n| GSS | 80.563 \\u00b1 8.239 |\\n| FDR | 81.271 \\u00b1 7.738 |\\n| DER | 81.010 \\u00b1 8.674 |\\n| DERPP | 78.177 \\u00b1 9.532 |\\n| DVC | 81.387 \\u00b1 7.821 |\\n| OBC | 80.516 \\u00b1 8.446 |\\n| **LwP (Ours)** | **83.652 \\u00b1 7.069** |\\n\\nThe table presents the performance of various methods when continuously learning five tasks in the presence of five base tasks using ResNet18 with a 64\\u00d764 resolution. We show that LwP outperforms the other methods, demonstrating its effectiveness in handling continual multitask learning scenarios. 
These results have been included in the revised manuscript to provide a more comprehensive evaluation of LwP's performance in continual multitask learning scenarios in Appendix D.7, along with a diagram illustrating the experiment in a different perspective. We thank the reviewer for highlighting this point.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThank you for reviewing our submission and providing constructive feedback. We have carefully addressed your comments in the rebuttal and believe these revisions help clarify the contributions and resolve potential misunderstandings. If our responses address your concerns, we kindly ask you to reconsider your evaluation. Should there be any remaining questions, we are happy to engage in further discussion and provide clarification before the Dec. 3rd deadline.\\n\\nBest regards,\\n\\nAuthor(s)\"}", "{\"comment\": \"**Response to Weakness 1:**\\n\\nThank you for your thoughtful feedback.\\n\\nOur goal is to leverage the continual multitask learning (CMTL) framework to develop a generalized representation space that consistently outperforms single-task learning (STL) and other baseline methods.\\n\\nIn one of our evaluation settings, we treat each subset of the dataset, defined by a specific label, as a separate task. While there is no data distribution shift in this setup, the key challenge lies in effectively identifying the current feature embeddings and aligning them with the requirements of new tasks, ensuring seamless integration and preservation of prior knowledge.\\n\\n**Response to Weakness 2:**\\n\\nCMTL assumes that data for different tasks are sampled from the same overall distribution but are not necessarily the same data points. This reflects real-world scenarios where different tasks involve related data domains without requiring repeated labeling of the exact same data.\\n\\nFor example, consider datasets collected from U.S. 
roads for different purposes: one for pedestrian detection and another for lane marking detection, collected sequentially. While the specific images may differ, they share common characteristics due to being from the same environment. Similarly, in the medical domain, the same MRI scan could be labeled sequentially by different professionals. A primary care physician might annotate the scan from one perspective, while a radiologist provides a secondary annotation with a specialized focus. This process mirrors scenarios where tasks involve related but distinct labeling objectives, contributing to a more comprehensive understanding of the data [Freeman et al., 2021].\\n\\nIn our experiments, we ensured that the data for each task was unique to that task, reflecting practical scenarios like the examples above. We will revise the paper to better articulate this point and avoid potential misunderstandings.\\n\\n**Response to Weakness 3:**\\n\\nThank you for your feedback regarding the methods used for comparison. We understand the importance of evaluating our approach against current and relevant baselines, and we did our best to utilize well-established methods. In our study, we included Dark Experience Replay (DER) and its enhanced version, DER++, as part of our comparative analysis [Buzzega et al., 2020], as they are publicly available and well-documented in the literature. These methods are not only widely used in continual learning scenarios but have also been utilized as strong baselines in other settings, as demonstrated in recent works [Fostiropoulos et al., 2023; Kim et al., 2023]. This reinforces their relevance to our comparative analysis.\\n\\nDER and DER++ are robust baselines in continual learning scenarios, offering strong performance and relevance to our research setting. By incorporating these methods, we ensured a comprehensive and up-to-date evaluation of our approach. 
If you have specific methods in mind that could further strengthen our comparative analysis, we are open to including them to provide a more thorough and well-rounded evaluation.\\n\\n**Response to Weakness 4:**\\n\\nWe appreciate your concern about hyperparameter tuning. In our experiments, we used the same loss weights (\\u03bb_c, \\u03bb_o, \\u03bb_d) across all three datasets. This consistency demonstrates that our method is robust and does not require extensive hyperparameter adjustments for different tasks or datasets.\\n\\nAdditionally, we followed the parameter usage design outlined in the original papers, ensuring that our methodology is grounded in well-established practices. We will revise the paper to emphasize the stability, practicality, and reliability of our approach, further highlighting this important aspect of our work.\\n\\n**References:**\\n\\nFreeman, Beverly, et al. \\\"Iterative quality control strategies for expert medical image labeling.\\\" Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. Vol. 9. 2021.\\n\\nBuzzega, Pietro, et al. \\\"Dark experience for general continual learning: a strong, simple baseline.\\\" Advances in Neural Information Processing Systems 33 (2020): 15920-15930.\\n\\nFostiropoulos, Iordanis, Jiaye Zhu, and Laurent Itti. \\\"Batch model consolidation: A multi-task model consolidation framework.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\nKim, Sanghwan, et al. \\\"Achieving a better stability-plasticity trade-off via auxiliary networks in continual learning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\"}
Unfortunately, we have not received any reviewer responses; thus, we are writing to summarize our major clarifications and paper revisions.\", \"To start with, we appreciate all the reviewers\\u2019 recognition of our work's novelty in new problem formulation and solution. We believe there may be some misunderstandings regarding the scope and novelty of our proposed framework from some of the reviewers, particularly concerning the assumptions of data distribution drift. To clarify, our paper introduces Continual Multitask Learning (CMTL) as a new problem category focusing on label space iteration, where tasks arrive sequentially with distinct labels applied to a consistent input distribution. The misunderstanding appears to stem from the assumption that CMTL must address data distribution drift. However, the proposed CMTL setting deliberately focuses on a scenario where tasks are introduced via label space iteration over a fixed input distribution. Unlike traditional continual learning, which often assumes distribution drift, our framework specifically addresses challenges like catastrophic forgetting and task interference within the label space. This is a realistic assumption in applications like medical imaging or autonomous systems, where the data distribution remains stable while new iterations of data or annotations are incrementally added. Handling data distribution drift falls outside the scope of this work but could be a promising direction for future research.\", \"We emphasize that the reviewer's concern about data distribution drift does not apply to our work, as the proposed setting intentionally focuses on tasks with a fixed input distribution. This simplification enables us to tackle unique challenges in CMTL, such as preserving shared representations across sequential label spaces. Moreover, most reviewers have recognized the novelty of our approach, including the dynamically weighted preservation loss that effectively retains knowledge. 
The following components are incorporated into the revised manuscript:\", \"We have revised the introduction to better articulate the CMTL setting and its practical implications to avoid potential misunderstandings.\", \"We have revised the caption and description of Figure 2 to provide a clear explanation of its components.\", \"The theoretical perspective on learning problems and the ablation study have been integrated into the main body.\", \"We have also fixed typos and minor errors that were present in the paper.\", \"Also, we have included following additional experiments in the manuscript:\", \"We have incorporated recent and relevant continual learning methods into our comparative analysis.\", \"Dual View Consistency (DVC) [Gu et al., 2022]\", \"Online Bias Correction (OBC) [Chrysakis and Moens, 2023]\", \"We have conducted experiments using larger models and higher-resolution datasets (see ***Appendix D.6***).\", \"We have expanded our experiments to assess the applicability of LwP in different task sequence scenarios. The model was tested on scenarios where it continuously learns additional tasks after initially learning five tasks in a multitask learning manner (see ***Appendix D.7***)\", \"**References**\", \"Chrysakis, A., & Moens, M.-F. (2023). Online bias correction for task-free continual learning. *International Conference on Learning Representations (ICLR)*.\", \"Gu, Y., Wang, Y., Wu, Z., Herrmann, C., & Herrmann, J. M. (2022). Not just selection, but exploration: Online class-incremental continual learning via dual view consistency. *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*.\"]}", "{\"comment\": \"Thank you for your clarification. 
I would like to hold to my rating of 6 (Marginal accept)\"}", "{\"title\": \"To Verify if Our Responses Have Addressed Your Concerns and Express Our Gratitude\", \"comment\": \"Dear Reviewer,\\n\\nWe deeply value the time and effort you have dedicated to reviewing our paper and providing insightful suggestions. As the discussion phase is coming to an end and no further author-reviewer interactions are planned, we would like to confirm if our responses from this and a few days ago have successfully addressed your concerns. We hope we have resolved the issues raised. However, if there are any points that require further clarification or additional concerns you would like us to address, please feel free to reach out. We remain fully committed to continuing our discussion with you.\\n\\nBest regards.\"}", "{\"comment\": \"**Response to Weakness 1:**\\n\\nWe appreciate your suggestion to consider combining datasets, as it could represent more complex and realistic scenarios. However, our focus on individual datasets aligns with the assumptions of the CMTL setting, where the input data distribution remains consistent across tasks and the labels are independent.\\n\\nCombining datasets like CelebA (image data) and PhysiQ (IMU sensory data) would introduce significant shifts not only in the input distribution but also in the data modality, which falls outside the scope of our current study. Our method is specifically designed to perform optimally under the condition of a consistent input distribution within a single modality, with allowances for minor deviations.\\n\\nAddressing scenarios that combine multiple modalities, or heterogeneous data would require extending the current framework to handle such complexity. This represents an interesting direction for future work, where our approach could be adapted to more general continual learning scenarios involving multimodal datasets.\\n\\n**Response to Weakness 2:**\\n\\nThank you for bringing this to our attention. 
Figure 2 is a conceptual diagram designed to illustrate the framework of our proposed LwP method. The images depicted are examples from the datasets (e.g., cancer tissues) and are included to convey the idea that the model is continuously learning different attributes (tasks) over time.\\n\\nWe acknowledge that the caption and explanation of Figure 2 could be clearer. In the revised manuscript, we will provide a more detailed description to clarify its purpose and ensure it effectively communicates the intended concept to the readers.\"}", "{\"title\": \"To Verify if Our Responses Have Addressed Your Concerns and Express Our Gratitude\", \"comment\": \"Dear Reviewer,\\n\\nWe deeply value the time and effort you have dedicated to reviewing our paper and providing insightful suggestions. As the discussion phase is coming to an end and no further author-reviewer interactions are planned, we would like to confirm if our responses from this and a few days ago have successfully addressed your concerns. We hope we have resolved the issues raised. However, if there are any points that require further clarification or additional concerns you would like us to address, please feel free to reach out. We remain fully committed to continuing our discussion with you.\\n\\nBest regards.\"}", "{\"metareview\": \"This paper received mixed reviews. The reviewers recognized the well-designed method for the proposed continual multi-task learning setting, its strong performance in the setting, and extensive experiments. However, they disagreed on the value of the proposed problem setting and benchmarks: Two reviewers appreciated the value of the problem setting (minZ, KdBT), while the others regarded the problem setting as a variation of existing ones and thus pointed out incremental novelty (qi2x, 9JHi).
Moreover, three reviewers considered that the benchmarks do not well reflect real-world application scenarios due to the lack of distribution shift, small-scale datasets, and outdated model architectures (qi2x, 9JHi, KdBT). Besides these issues, the reviewers raised concerns with lack of comparisons with latest work (qi2x, minZ), potential sensitivity to hyperparameters (qi2x), lack of theoretical foundation of the proposed loss function (minZ), and missing essential ablation study on the dynamic weighting (9JHi).\\n\\nThe authors' rebuttal and subsequent responses in the discussion period addressed some of these concerns but failed to fully assuage all of them. In particular, after the discussion period, Reviewer qi2x still pointed out the issues with the problem setting and lack of comparisons with latest work. Also, the AC found that the concerns with limitations of the benchmarks have been only partially resolved as the authors did not present additional datasets or incorporate the reviewers' suggestion, although Reviewer 9JHi did not come back and Reviewer KdBT gave a positive score in the end. Further, the AC sees that the rebuttal did not successfully address some of the remaining concerns like sensitivity to hyperparameters (quantitative analysis required) and the lack of theoretical foundation (only empirical analysis results added).\\n\\nPutting these together, the AC considers that the remaining concerns outweigh the positive comments and the rebuttal, and thus regrets to recommend rejection. The authors are encouraged to revise the paper with the comments by the reviewers and the AC, and submit to an upcoming conference.\", \"additional_comments_on_reviewer_discussion\": [\"The rebuttal failed to assuage major concerns of the reviewers, and thus two reviewers voted to reject; one of the negative reviewers did not come back but the AC found that his/her concerns are not fully resolved by the rebuttal and revision.
Below, the major concerns of the reviewers and how they were addressed are summarized.\", \"**The proposed problem setting and benchmarks do not well reflect real-world application scenarios (qi2x, 9JHi, KdBT)**: *The AC weighed this issue very heavily when making the final decision.* Reviewer qi2x considered the proposed problem setting, i.e., continual multi-task learning (CMTL), to be a simplified variant of existing continual learning settings due to the absence of distribution shift, and thus believes the setting does not reflect real-world application scenarios; Reviewer KdBT also left almost the same comment, though the reviewer was positive. Reviewer 9JHi pointed out that the proposed benchmarks are limited due to the use of small-scale datasets and outdated model architectures. The authors failed to fully assuage these concerns. They did not provide additional experiments with distribution shifts, but only reiterated the significance of the problem setting mentioned already in the paper. Also, additional experiments in the rebuttal failed to address the concerns with the benchmarks, in particular their scales and reality, since the experiments were conducted on one of the datasets already used in the paper. For these reasons, the value of the CMTL setting does not look significant to this AC. In particular, it is unclear what its key difference from class incremental learning is, as the benchmarks are still about classification, i.e., classification of different types of attributes, which seems to be interpretable as a variant of class incremental learning. It would be nice if the authors demonstrated more task variations to reflect the more realistic application scenarios illustrated in the rebuttal and revision.\", \"**Lack of comparisons with latest work (qi2x, minZ)**: The AC feels this issue has been well addressed by additional results in the rebuttal, and Reviewer minZ was also satisfied.
However, Reviewer qi2x did not; the reviewer wanted comparisons with additional state-of-the-art methods. *The AC did not weigh this issue heavily when making the final decision* since the last comment by Reviewer qi2x does not suggest any specific example of such methods to be compared.\", \"**Potential sensitivity to hyperparameters (qi2x)**: The reviewer said his/her concern on this issue has been resolved, but the AC does not agree. Using the same set of hyperparameter values for all the three datasets is of course desirable, but still it could be difficult to find such a combination of hyperparameter values that works best for all the three datasets, especially if the performance is sensitive to the hyperparameters. To better address this issue, the authors should provide detailed quantitative analysis results, e.g., accuracy vs. hyperparameter values.\", \"**Lack of theoretical foundation of the proposed loss (minZ)**: The AC sees this issue has not been fully addressed since the authors did not provide a theoretical foundation but instead presented an ablation study on the loss function. However, the reviewer seems to be satisfied as he/she raised the score to 6, and thus *the AC did not weigh this issue heavily when making the final decision*.\", \"**Missing essential ablation study on the dynamic weighting (9JHi)**: Additional results in the rebuttal and revision successfully resolved this issue.\", \"**Misc**: The AC found that the most negative reviewer is the most confident and the most experienced in the continual learning field among the four reviewers. Also, the quality of writing should be improved substantially to meet the standard of ICLR and other top-tier ML conferences.\"]}", "{\"comment\": \"**Response to Weakness 1:**\\n\\nIn the CMTL setting, we focus on scenarios where the labels represent independent attributes of the same input domain shared across time.
For example, using the CelebA dataset, one-third of the data may focus on learning one attribute (e.g., \\\"smiling\\\"), another third on a different attribute (e.g., \\\"wearing glasses\\\"), and the final third on yet another attribute (e.g., \\\"gender\\\"). Importantly, these new label tasks can also apply to data from previous tasks, making this setting distinct.\\n\\nThis introduces unique challenges that differ from earlier works like LwF. While LwF typically addresses scenarios involving mutually exclusive labels or data distribution shifts, CMTL operates under the stricter condition of consistent input distributions across tasks. This requires models to effectively utilize shared input distributions while handling independent labels, a setting where existing methods often struggle to outperform single-task learning baselines.\\n\\nOur contribution lies in addressing this gap with a specialized approach tailored to the CMTL setting, enabling effective knowledge preservation and multitask learning under these more realistic and challenging conditions.\\n\\n\\n**Response to Weakness 2:**\\n\\nThank you for raising this important point. Our experiments currently cover datasets with varying scales\\u2014CelebA with 10 tasks at 32\\u00d732 resolution, FairFace at standard resolutions using ResNet-18, and PhysiQ as a time-series benchmark. We agree that testing our method on higher-resolution images and larger models would provide additional evidence of its effectiveness.\\n\\nSince our approach is not constrained by the architecture, as long as there is a representation space before the output layer, we are confident it can generalize to other models like ViT.
To strengthen our validation and demonstrate the scalability and robustness of our method in more demanding settings, we will conduct further experiments with larger models and higher-resolution datasets and include these results in the revised paper.\\n\\n**Response to Weakness 3:**\\n\\nThank you for pointing out the importance of further validating the contribution of dynamic weighting. We have addressed this aspect in our ablation study (Appendix E.5). To briefly summarize, we evaluated the impact of our proposed loss function by selectively disabling the dynamic weighting feature and comparing it with other structure-preserving loss functions. The baselines included in our assessment are CO2L [Cha et al., 2021], RKD [Park et al., 2019], cosine similarity, and the RBF kernel [Han et al., 2012].\\n\\nThe results, presented in Table 3, show that the loss function with both dynamic weighting and Euclidean distance consistently outperforms these alternatives. This highlights the importance of each component in achieving optimal performance. We believe that the superior performance of Euclidean distance with dynamic weighting is due to its unnormalized nature across batches, unlike previously proposed methods.\\n\\nWe will ensure this point is emphasized more clearly in the revised manuscript to underline the contribution of dynamic weighting.\\n\\n**Reference:**\\n\\nHyuntak Cha, Jaeho Lee, and Jinwoo Shin. Co2L: Contrastive Continual Learning, June 2021. URL http://arxiv.org/abs/2106.14413. arXiv:2106.14413 [cs].\\n\\nWonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho. Relational Knowledge Distillation, May 2019. URL http://arxiv.org/abs/1904.05068. arXiv:1904.05068 [cs].\\n\\nShunjie Han, Cao Qubo, and Han Meng. Parameter selection in svm with rbf kernel function. In World Automation Congress 2012, pp. 1\\u20134. 
IEEE, 2012.\"}", "{\"summary\": \"This paper introduces a new problem setting called Continual Multitask Learning (CMTL) and proposes a novel method called Learning with Preserving (LwP) to address it. CMTL is defined as a scenario where a model needs to learn multiple different tasks sequentially, with input data coming from the same distribution but each task having distinct label spaces. The proposed LwP method aims to preserve previously learned knowledge in the shared representation space without requiring a replay buffer of old data. It uses a novel Dynamically Weighted Distance Preservation (DWDP) loss to maintain the integrity of representations. Extensive experiments demonstrate LwP's strong performance and generalization abilities in CMTL scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. A new continual learning setting is introduced.\\n2. A method tailored to the new setting is designed.\", \"weaknesses\": \"1. Compared to general CL scenarios, such as CIL, DIL (domain-incremental learning), and TIL (task-incremental learning), the proposed CMTL setting can indeed be seen as an idealized, simplified version. CMTL lets the input data come from the same distribution, which means that all tasks are performed on the same data domain, without considering the case where the data distribution drifts over time. In real-world applications, the data distribution of subsequent tasks may differ from the previous ones.\\n2. It is difficult to imagine how this setting could be implemented in reality. In actual scenarios, it might only be achievable by repeatedly labeling the same set of data with new labels. Even updating the data slightly would likely change its domain distribution.\\n3. 
The methods used for comparison are somewhat out of date.\\n4. The performance of LwP likely depends on careful tuning of the loss weights (\\u03bbc, \\u03bbo, \\u03bbd).\", \"questions\": \"Please refer to my comments in the 'Weaknesses' section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"To Verify if Our Responses Have Addressed Your Concerns and Express Our Gratitude\", \"comment\": \"Dear Reviewer,\\n\\nWe deeply value the time and effort you have dedicated to reviewing our paper and providing insightful suggestions. As the discussion phase is coming to an end and no further author-reviewer interactions are planned, we would like to confirm if our responses from this and a few days ago have successfully addressed your concerns. We hope we have resolved the issues raised. However, if there are any points that require further clarification or additional concerns you would like us to address, please feel free to reach out. We remain fully committed to continuing our discussion with you.\\n\\nBest regards.\"}", "{\"summary\": \"This paper introduces Learning with Preserving (LwP), a novel framework designed for Continual Multitask Learning (CMTL), which involves learning different tasks sequentially while preserving shared representations.
The paper evaluates LwP on three benchmark datasets across two modalities, demonstrating its competitive performance compared to existing continual learning methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper proposes a new scenario of continual learning, CMTL, highlighting its unique challenges and significance in practical applications.\\n2.\\tThe LwP framework is innovative in preserving previously learned knowledge in a way that remains applicable and beneficial across diverse tasks.\\n3.\\tThe experimental results suggest that LwP demonstrates competitive performance compared to existing continual learning methods.\", \"weaknesses\": \"1.\\tHow does the proposed method address the fundamental challenges in continual learning, such as catastrophic forgetting or the stability-plasticity dilemma?\\n2.\\tThe Dynamically Weighted Distance Preservation (DWDP) loss is an innovative contribution. However, it would be valuable to delve deeper into the theoretical foundations of DWDP, exploring its relationship to other distance-preserving techniques and providing additional insights into why it is effective for preserving implicit knowledge.\\n3.\\tA point of concern is that the continual learning methods in the comparison experiment are not state-of-the-art, and therefore may not effectively substantiate the validity of the method proposed in this paper.\\n4.\\tFurther exploration is needed for more experimental settings, such as investigating the performance of a model when continuously learning five tasks in the presence of five base tasks.\\n5.\\tThe section on the extension to learning problems (pages 19-20) provides a valuable insight into the theoretical underpinnings of LwP, but it could be integrated more seamlessly into the main body of the paper to enhance its readability and coherence.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\":
\"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper aims to address the continual multi task problem. The paper proposes a LwP loss in addiction to current loss and loss to preserve old preditions. LwP tries to preserve the knowledge in the implicit knowlege space. The paper also propose to masks the loss on LwP if the labels are different and in that case it is not nessory to have this preserving loss.\\n\\nThe paper then goes to evaulate this approch on various small scale benchmarks, and specilay on image datasets, it shows a clear gains over previous approches. The paper also shows the BWT metric for all the continual learning methods, and t-sne plots for the latent space.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Performance on all the benchmarks are impressive. Figure 5 clearly shows the minimal loss in performance in the previous tasks as the learning progress.\\n\\nthe benifits of Learning with Preserving (LwP) loss as a regulaization is very solid, and can be seen on the figure 5 and table 1, and compared to other appoches LwP performs considerably well. \\n\\nthe evaulation is done with a good coverage, with 3 vision benchmarks, and show the distributions of these latents in t-sne plots. the paper also measures the backward transfer values of the continual learning methods.\", \"weaknesses\": \"It is not clear, how this CMTL problem is novel, it is same as in early LwF papers, and the paper claims this is one of the contibutions. please adress this in the rebuttal.\\n\\nwhile the results are impressive, i am bit scaptical on the scale of the datasets, all have been trained on smaller scale and low resolution. would be nice to show some results on larger resolution images and models. Also would be nice to show that this approch can work for other archituctres like vit. I belive it should work without any problems. 
I still think ResNet-18 is too small a model in the current landscape to validate anything concretely.\\n\\nAlso, there are not enough ablations to verify the contributions of dynamic weighting; those would be helpful to validate this claim.\", \"questions\": \"Please look at my strengths and weaknesses sections, and if you can address the weaknesses section, I am happy to change my ratings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Response to Weakness 1:**\\n\\nOur proposed method addresses catastrophic forgetting in the CMTL setting through the Dynamically Weighted Distance Preservation (DWDP) loss. This loss function aims to preserve approximate solutions for any problem that can be defined in z by leveraging the universality of kernel machines with the Gaussian kernel as approximators (see Sec. 3.2). By maintaining the integrity of the shared representation space, the DWDP loss enables the model to retain valuable knowledge from prior tasks while effectively learning new ones. \\n\\nMoreover, our approach inherently balances the stability-plasticity dilemma. By employing dynamic weights, it prioritizes the acquisition of new information without destabilizing previously acquired knowledge. This ensures a harmonious trade-off between stability (preserving past knowledge) and plasticity (adapting to new knowledge), which is critical for success in continual learning scenarios.\\n\\n**Response to Weakness 2:**\\n\\nWe agree that a deeper theoretical exploration of the DWDP loss would enhance the paper. While our work provides initial analysis and ablation studies (see Appendix E.5) comparing DWDP with other distance-preserving techniques, we found that using the unnormalized Euclidean distance effectively preserves the global structure of the representation space, resulting in improved performance.
To briefly summarize, we evaluated the impact of our proposed loss function by selectively disabling the dynamic weighting feature and comparing it with other structure-preserving loss functions. The baselines included in our assessment are CO2L [Cha et al., 2021], RKD [Park et al., 2019], cosine similarity, and the RBF kernel [Han et al., 2012].\\n\\nWe hypothesize that this is due to the unnormalized distances capturing absolute relationships between representations more accurately. To strengthen the paper, we will expand the theoretical discussion in the revised version. \\n\\n**Response to Weakness 3:**\\n\\nThank you for your feedback regarding the methods used for comparison. We understand the importance of evaluating our approach against current and relevant baselines, and we did our best to utilize well-established methods. In our study, we included Dark Experience Replay (DER) and its enhanced version, DER++, as part of our comparative analysis [Buzzega et al., 2020], as they are publicly available and well-documented in the literature. These methods are not only widely used in continual learning scenarios but have also been utilized as strong baselines in other settings, as demonstrated in recent works [Fostiropoulos et al., 2023; Kim et al., 2023]. This reinforces their relevance to our comparative analysis.\\n\\nDER and DER++ are robust baselines in continual learning scenarios, offering strong performance and relevance to our research setting. By incorporating these methods, we ensured a comprehensive and up-to-date evaluation of our approach.\\n\\nIf you have specific methods in mind that could further strengthen our comparative analysis, we are open to including them to provide a more thorough and well-rounded evaluation.\"}" ] }
CA06Nqa7CG
Utilitarian Algorithm Configuration for Infinite Parameter Spaces
[ "Devon R. Graham", "Kevin Leyton-Brown" ]
Utilitarian algorithm configuration is a general-purpose technique for automatically searching the parameter space of a given algorithm to optimize its performance, as measured by a given utility function, on a given set of inputs. Recently introduced utilitarian configuration procedures offer optimality guarantees about the returned parameterization while provably adapting to the hardness of the underlying problem. However, the applicability of these approaches is severely limited by the fact that they only search a finite, relatively small set of parameters. They cannot effectively search the configuration space of algorithms with continuous or uncountable parameters. In this paper we introduce a new procedure, which we dub COUP (Continuous, Optimistic Utilitarian Procrastination). COUP is designed to search infinite parameter spaces efficiently to find good configurations quickly. Furthermore, COUP maintains the theoretical benefits of previous utilitarian configuration procedures when applied to finite parameter spaces but is significantly faster, both provably and experimentally.
[ "Algorithm configuration", "Utilitarian algorithm configuration", "bandits" ]
Accept (Poster)
https://openreview.net/pdf?id=CA06Nqa7CG
https://openreview.net/forum?id=CA06Nqa7CG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "quWUTNTWeV", "gwySUE8S25", "fC9Ga4hrF9", "cgXOhW7vx4", "asabF0xBCM", "WQmPXrnVNK", "SqMWdIVpjZ", "QGLyD9Fvbd", "PgJR4wj65v", "K4XQ3Zay0P", "IMgLwmESQv", "Bn38vQSZ2h", "BhDLXfdskt", "8CDQjMbJRf", "85rvfJAGsp", "5wX2RMq3hI", "2nrVQ9J0eD" ], "note_type": [ "decision", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1737524034322, 1734464737967, 1732195898873, 1732196249743, 1730716528091, 1732195864176, 1732313162878, 1730754441952, 1732196181918, 1732196305517, 1732196122011, 1730753895733, 1732196229006, 1730627902553, 1732264837910, 1730721834674, 1731019422847 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10225/Authors" ], [ "ICLR.cc/2025/Conference/Submission10225/Authors" ], [ "ICLR.cc/2025/Conference/Submission10225/Reviewer_rtTp" ], [ "ICLR.cc/2025/Conference/Submission10225/Authors" ], [ "ICLR.cc/2025/Conference/Submission10225/Reviewer_a4rf" ], [ "ICLR.cc/2025/Conference/Submission10225/Reviewer_Fgpz" ], [ "ICLR.cc/2025/Conference/Submission10225/Authors" ], [ "ICLR.cc/2025/Conference/Submission10225/Authors" ], [ "ICLR.cc/2025/Conference/Submission10225/Authors" ], [ "ICLR.cc/2025/Conference/Submission10225/Reviewer_67Uh" ], [ "ICLR.cc/2025/Conference/Submission10225/Authors" ], [ "ICLR.cc/2025/Conference/Submission10225/Reviewer_a4rf" ], [ "ICLR.cc/2025/Conference/Submission10225/Reviewer_Fgpz" ], [ "ICLR.cc/2025/Conference/Submission10225/Reviewer_1WyU" ], [ "ICLR.cc/2025/Conference/Submission10225/Reviewer_rWFx" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"Core technical content is 
bleeding onto the 11th page, for which current guidance is desk rejection. Moreover, none of the reviewers have thoroughly evaluated the main theoretical claims in terms of best arm identification with respect to the state of the art, and are overly positive while lacking any confidence about their assessments.\\n\\n------\\n\\nThis is a revised meta-review by PCs, as the original meta-review contained a factual error which led to the reject decision.\\n\\nPCs reviewed the reviews and discussions, and also consulted the manuscript. PCs concur with some of the strengths and weaknesses pointed out by the reviewers. PCs recognize the interestingness of the conceptual generalization. PCs also took note of the mixed empirical success and the desiderata of having more systematic comparisons to related algorithms for this problem. Finally, PCs took the reviewers' confidence levels into account when making their assessments.\\n\\nThe overall conclusion is that the paper is recommended to be accepted as a poster.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}", "{\"comment\": \"\\\"The presentation of this paper is very hard to follow.\\\"\\n\\n- We hope that the changes we have made help with the overall presentation. \\n\\n\\n\\\"What is delta in OUP?...\\\"\\n\\n- Delta is the failure probability. We have specified this more clearly in OUP's inputs now.\"}", "{\"comment\": \"We thank the reviewer for their time and effort.\"}", "{\"summary\": \"The authors introduced a new procedure, COUP, which claims that 1) it can run on continuous space and 2) it is faster than UP in the case of discrete space. Later experiments confirmed their claims.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper appears organized and well-written.
The authors effectively emphasized the big picture in Sections 1 and 2, providing a clear understanding of this new and improved algorithm.\\n\\nThe algorithm is general enough to be applied to many optimization problems. It includes theoretical properties; however, we did not check the appendix for the validity of the proof.\", \"weaknesses\": \"na\", \"questions\": \"na\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"We thank all reviewers for their time and helpful comments. We have addressed individual concerns in the comments below and we have uploaded a new pdf of the paper that incorporates reviewers' suggestions. The notable changes are:\", \"Updated Figure 1 to ease readability: We have swapped the axes so that time runs left-to-right instead of bottom-to-top, which we think is more natural, and we have put total running time on a log-scale. We have also included more finely-grained data points.\", \"Updated Figure 5 to ease readability and make a runtime-based comparison: We have included plots for runtime-based procedures as well as utilitarian ones. We have also changed the way the averages and error regions are represented.\", \"Restructured text to move definitions ahead of algorithm descriptions.\"]}", "{\"comment\": \"Thanks. I see all of my concerns have been addressed.\"}", "{\"summary\": \"An approach for algorithm configuration is presented based on maximizing the utility of the target algorithm. The approach follows a line of work on \\\"utilitarian procrastination (UP)\\\", extending the previous work to function in continuous (infinite) spaces. An interesting aspect of the work is that this extension does not come at the cost of hurting performance on the finite-space case; in fact, the performance improves.
The resulting method, COUP, has proven bounds indicating essentially how good the configurations it finds are.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. The paper moves the theoretical results for algorithm configuration in a very important direction, namely one step closer to real-world settings.\\n2. The paper is generally understandable at various levels of detail -- one need not delve into the math to understand the what and why of the paper. The math is clean and well-written, but I note that I could not completely evaluate all of the proofs.\\n3. The experimental results are quite good, especially in an area where the gains have been mostly small or just on different areas of the Pareto front of time vs quality. This approach is very competitive.\", \"weaknesses\": \"1. There are a few minor clarity issues:\\n\\t1. The start of the paper is a bit of a slog. It would have been nice to get to the point faster.\\n\\t2. I do not understand why the notation for Algorithm 1 is introduced after the algorithm is explained. I needed the notation before reading the explanation, so I ended up just being confused until I found the notation, then had to go back and read it again. \\n\\t3. In Algorithm 1, i is shadowed on line 7. Very minor, but it just seems weird (the same goes for Alg. 2).\\n2. I found the visuals in the experiments sometimes hard to read, and was confused by some aspects.\\n\\t1. Particularly the brown/purple/green combination has very similar shadings on some monitors/printers; another color scheme might be better. \\n\\t2. In Figure 3, the green line for OUP goes backwards between total time 10^0 and 10^1. Something must be wrong there.\\n\\t3. The symbols in Figure 5 do not match the legend, and in all honesty I can barely figure out what is going on here. The figures are just too small with too many points crowded in the same spots.
\\n\\nOverall, I have found no major issues with the paper, but I acknowledge that I am not an expert in the math of this paper and could have overlooked something. I am also not convinced of the superiority of the utilitarian approach of AC versus runtime configuration. However, the authors identify valid limitations and I find the use of utilitarian AC plausible for certain users, thus I do not view my disagreement on this point as something to hold against this paper.\", \"questions\": \"1. See issues with experiments above.\\n2. On line 6 of algorithm 2, COUP samples new configurations at random, which is basically how all of the theoretical approaches to AC work. Is there no way of sampling configurations that might actually be good while maintaining guarantees? It feels like these approaches are all poking around in the dark rather than actually optimizing. (future work, I suppose)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"\\\"... only the top percentile is considered (which is simpler with sampling), but I assume similar conditions on the utility could be considered here as well.\\\"\\n\\n- Stronger guarantees could indeed be made with stronger assumptions about the space of configurations. Generally in algorithm configuration, we have little idea of what structure this space has. We do not, in general, expect it to be Lipschitz continuous. Often we find that small changes to a parameter within a given range have little or no effect on performance, but if the value is pushed just outside of this range, performance changes drastically. There is a body of work showing that the parameter space is often divided into regions where performance is relatively stable, separated from each other by large jumps in performance (see, e.g., Balcan et al., (2021)), and for certain problems where this structure is known, it can indeed be taken advantage of. 
\\n\\n\\n\\\"I think, it would have been reasonable to test the algorithm against algorithms that minimize the running time...\\\" \\n\\n- We have now added this comparison to Figure 5 (second row) of the updated pdf. The utility function used coincides with (capped) average runtime.\"}", "{\"comment\": \"\\\"The impact of the extension to continuous space is unclear. I would suggest to compare COUP with OUP + fixed discretization of continuous space.\\\"\\n\\n- We see the benefit of the extension to continuous space as being the seamless applicability, in an anytime fashion, of the procedure to continuous and infinite parameter spaces, which is the reality for most algorithms. The alternative would be to sample a finite set of configurations and run OUP on this set. But how big should this set be? And what do we do if OUP finishes and we decide this set was not big enough? The only option would be to sample more configurations and then rerun OUP or, better yet, have OUP pick up where it left off, adjusting the probabilistic bounds accordingly, which is precisely what COUP does. So the real benefit of COUP over OUP is that it does this sampling internally and, importantly, does it in an anytime fashion, meaning it samples more and more configurations as time goes on. An investigation of the effect of discretization/sampling might be interesting but is not possible with our current pre-computed datasets. \\n\\n\\n\\\"The continuous structure (similarities among neighbor parameters) is not utilized in COUP. So, C in COUP does not describe its nature accurately.\\\"\\n\\n- We definitely don't want our procedure's name to be misleading. In response to your review we had a long discussion about this issue and about other candidate names. In the end, we couldn't think of a better alternative, particularly given our desire to use a name that reflects the tight connection with OUP. 
It seems to us that the \\\"C\\\" for \\\"Continuous\\\" does make sense because COUP can handle parameters with continuous domains (i.e., continuous in the sense of \\\"continuous random variable\\\") by taking an unbounded number of samples from these domains. In contrast, OUP (and UP, etc.) cannot: the set of samples they consider must be chosen in advance. We have revised the paper to make this justification for the name clearer and to forestall any potential misunderstanding. \\n\\n\\n\\\"In Lemma 2, don\\u2019t you need 'with probability 1 - delta'?\\\"\\n\\n- Lemma 1 says that an execution is \\\"clean\\\" with probability $1 - \\\\delta$, and Lemma 2 says that if the execution is clean, then the bounds hold. \\n\\n\\n\\\"Why not use a log scale for the vertical axes of Figure 1?\\\"\\n\\n- The updated pdf now shows total time on a log scale.\"}", "{\"comment\": \"\\\"The start of the paper is a bit of a slog. It would have been nice to get to the point faster.\\\"\\n\\n- We have tried to tighten up the first two sections and we'll give particular attention to these when making our final pass. \\n\\n\\n\\\"notation for Algorithm 1 is introduced after the algorithm is explained\\\"\\n\\n- We have now adjusted the layout so that definitions come before the algorithm and its description. \\n\\n\\n\\\"i is shadowed...\\\"\\n\\n- Fixed, thanks. \\n\\n\\n\\\"brown/purple/green combination...\\\"\\n\\n- We have changed Figure 1 significantly in the updated pdf. The procedures should now be more easily distinguishable based on their color and marker types. \\n\\n\\n\\\"In Figure 3, the green line for OUP goes backwards...\\\"\\n\\n- This is to be expected sometimes and essentially has to do with the order in which the configurations are sampled. At the end of each phase COUP has proved epsilon-optimality with respect to a particular set of configurations. We give these to OUP and ask it to prove optimality for the same epsilon. 
What has happened here is that a good configuration was sampled in phase 5 by COUP. When we then give this set of configurations to OUP and ask it to prove epsilon-optimality, it is able to do so more quickly than it did in phase 4 because of the presence of this good configuration. We have added an explanation of this in the updated pdf. \\n\\n\\n\\\"The symbols in Figure 5...\\\"\\n\\n- We have changed Figure 5 significantly in the updated pdf.\\n\\n\\n\\\"COUP samples new configurations at random... Is there no way of sampling configurations that might actually be good while maintaining guarantees?...\\\" \\n\\n- Indeed, we do believe this is a very promising direction for future work. Making the theoretical guarantee we do requires independent randomly-sampled configurations. However, in addition to this, some configurations may be sampled according to a predictive model. This is the approach taken by some existing heuristic algorithm configuration procedures (e.g. SMAC). For example, half of the configurations may come from random sampling and half from the predictions of a random forest which is trained along the way. The random forest will tend to focus in on good areas of the space, and the quantile guarantee can still be made, while the total runtime increases by at most a factor of 2.\"}", "{\"summary\": \"The paper considers the problem of algorithm configuration. There are two key elements that are not handled in combination by previous approaches: (1) infinite (potentially continuous) parameter spaces, and (2) utilitarian reward (which is a function of the running time).\\n\\nA new algorithm is proposed, COUP, which is based on UCB, using a doubling trick for extending the cap on the running time, and sampling increasing numbers of arms to deal with the infinite space of arms. A simplified version for a finite number of arms, OUP, is also considered. The algorithms are shown to achieve a close-to-optimal configuration. 
\\n\\nThe proposed algorithm is compared to another utilitarian algorithm UP, achieving faster convergence for the finite case. The COUP algorithm seems to perform well empirically for the many configurations case as well.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The problem considered is important, and the proposed algorithms are sound.\\n\\nThe theoretical guarantees are valuable.\\n\\nThe paper is well written.\", \"weaknesses\": \"The infinite parametric space problem seems to me related to the bandit problems in continuous/metric spaces. For those problems, the performance of the bandit algorithm is compared to the optimal solution, which is facilitated by Lipschitz or stronger conditions on the reward function. In this work only the top percentile is considered (which is simpler with sampling), but I assume similar conditions on the utility could be considered here as well.\\n\\nThe experiments use a very limited set of baselines (UP). I think, it would have been reasonable to test the algorithm against algorithms that minimize the running time (with the utility coinciding with the running time, or just using the running time as a surrogate measure for those algorithms). Testing non-bandit-based algorithm configuration approaches would also make sense.\", \"questions\": \"The questions for me are centered around the relation with metric space bandits, and the use of additional baselines.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"\\\"The presentation of Section 3 could be improved slightly; introducing the notation before presenting Algorithm 1...\\\"\\n\\n- We have adjusted the layout in the updated pdf so that definitions come before the algorithm description. 
\\n\\n\\n\\\"Anonymous repo link seems to be missing?\\\"\\n\\n- Our intention was to release the code at time of publication; the \\\"anonymous repo link\\\" was just a placeholder, but we can release the code anonymously beforehand if reviewers find it important. \\n\\n\\n\\\"In Figure 1 - second row - third graph - there is a sudden drop in the total runtime (for OUP) around... Is this due to some specific structure of the problem/algorithm being analyzed?\\\"\\n\\n- This just means that at around 700 CPU days OUP was able to prove a much better epsilon (.1) than it was able to prove before (e.g., epsilon of about .2 at 500 CPU days). This may be because OUP started seeing more instances that were more representative, or it could be the result of a captime doubling event revealing something about the runtime CDFs that was not observed before. \\n\\n\\n\\\"... it\\u2019s unclear to me that COUP/OUP actually outperforms [Hyperband]. Is there a reason to expect COUP/OUP would perform better, or is there something I might be overlooking?\\\" \\n\\n- We think Hyperband is generally a good algorithm and indeed it does beat OUP/COUP on some datasets with some parameter settings. But on others it is consistently worse (e.g., the minisat dataset). Hyperband is not anytime, so once we've committed to running it and we get an answer we cannot work to improve that answer without completely re-running it with more refined parameters. OUP/COUP work continually to improve the answer they give, until the user is satisfied, and in all cases they eventually find a configuration that is at least as good as the one found by Hyperband. 
Additionally, OUP/COUP make guarantees about the near-optimality of the returned configuration, which Hyperband is unable to do.\"}", "{\"summary\": \"The authors study the problem of utilitarian algorithm configuration with an infinite number of parameters.\\nThe key idea is i) to adopt the UCB (upper confidence bound) method rather than the SE (sequential elimination) method in balancing exploration and confirmation of the best parameter to reach the same guarantee with a shorter run time, and ii) to relax the reference point of the guarantee from the best to quantiles.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"It is well-motivated, well-executed and well-written.\", \"The theoretical analysis looks intuitive and right.\"], \"weaknesses\": [\"The impact of the extension to continuous space is unclear. I would suggest to compare COUP with OUP + fixed discretization of continuous space.\", \"The continuous structure (similarities among neighbor parameters) is not utilized in COUP. So, C in COUP does not describe its nature accurately.\"], \"minor\": [\"In Lemma 2, don\\u2019t you need \\u201cwith probability 1 - \\\\delta\\u201d?\", \"Why not use a log scale for the vertical axes of Figure 1?\"], \"questions\": \"Can you address the points I raised in Weakness?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the updates. I think you have significantly enhanced the readability of the work, both at the beginning and in the experiments. I maintain my score.\"}", "{\"summary\": \"The paper introduces the COUP (Continuous, Optimistic Utilitarian Procrastination) procedure, which explores the parameter space - possibly uncountably infinite - of a given algorithm to optimize its performance on a specified set of inputs, with performance measured through a utility function. 
The finite (parameter space) version of the procedure, known as OUP, improves upon UP (Graham et al., 2023b) by incorporating ideas from the UCB algorithm in the bandit literature. Additionally, COUP generalizes the procedure to a possibly uncountably infinite parameter space. While doing so, COUP retains the theoretical guarantees of the UP algorithm while being significantly faster, as demonstrated by both theoretical guarantees and empirical results.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The finite version (OUP) procedure improves over the previous (UP) procedure using ideas from the UCB algorithm in the bandit literature.\\n\\n2) Extension of the procedure to infinite parameter spaces.\\n\\n3) Maintaining the theoretical guarantees provided by the UP procedure.\", \"weaknesses\": \"1) The presentation of Section 3 could be improved slightly; introducing the notation before presenting Algorithm 1 might enhance readability.\", \"questions\": \"1) Anonymous repo link seems to be missing?\\n\\n2) In Figure 1 - second row - third graph - there is a sudden drop in the total runtime (for OUP) around $\\\\epsilon \\\\approx 0.19$. Is this due to some specific structure of the problem/algorithm being analyzed?\\n\\n3) The authors mention that Hyperband also performs well in the percentage-gap metric, but from Figure 5, it\\u2019s unclear to me that COUP/OUP actually outperforms it. Is there a reason to expect COUP/OUP would perform better, or is there something I might be overlooking?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies utilitarian algorithm configuration, which is about automatically searching the parameter space of a given algorithm to optimise its performance.\\n\\nThe first proposed algorithm is OUP. 
It is a UCB-style algorithm for finding configurations with good capped mean utility, with the addition of a scheme for periodically doubling the cap time.\\n\\nThe main algorithm is COUP. It builds on a new optimisation goal called (epsilon, gamma)-optimality. Epsilon is about being close to optimality and gamma controls how many configurations to sample. For fixed epsilon and gamma, the inner algorithm is for the most part just OUP. The novelty of the proposed algorithm is that it allows the sharing of trials over different epsilon and gamma.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The proposed method is general and there are extensive experiments demonstrating its performance.\", \"weaknesses\": \"The presentation of this paper is very hard to follow.\", \"questions\": \"What is delta in OUP? I don't see it being referenced in the description of OUP. Is it only used in one of the subroutines?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
C9pndmSjg6
Advancing Portfolio Optimization: Hybrid Relaxation and Heuristic Approaches for Cardinality-Constrained MIQP Problems
[ "Zayn Wang" ]
The growing magnitude of investments in global markets has intensified the need for sophisticated risk mitigation strategies in portfolio optimization. Traditional portfolio optimization models that seek to minimize risk for a specified return frequently incorporate cardinality constraints, rendering them as Mixed-Integer Quadratic Programming (MIQP) challenges. These constraints elevate the problem to NP-Hard status, complicating the solution process. While heuristic methods have historically been favored for their direct approach to MIQP problems, relaxation techniques offer a strategic alternative by simplifying MIQP into a more tractable Quadratic Programming (QP) problem. We first introduce an approach that facilitates the conversion of MIQP to QP by relaxing integer constraints into continuous domains and integrating integer conditions into the objective function using Lagrange multipliers. This dual application not only eases the computational burden but preserves the integrity of the original problem's structure. An innovative diagonalization technique applied to the covariance matrix further refines our method, enhancing the fit for integer variables, as Lagrange multipliers are inherently biased towards continuous variables. We present a comparative analysis of three distinct models, Linear, Dual, and Diagonal, each employing a unique relaxation strategy. Our research evaluates their efficacy in addressing the MIQP problem under cardinality constraints. In conjunction with heuristic methods, the refined solutions from our exact relaxation models serve as a starting point for further refinement using Genetic Algorithm and Neighborhood Searching Algorithm. This hybrid methodology yields results that not only rival but occasionally surpass those achieved by the latest models and the commercial solver CPLEX. Our findings endorse the potential of combining exact and heuristic techniques in portfolio optimization, marking a significant advancement in the field.
[ "portfolio optimization", "mixed-integer quadratic programming", "relaxation" ]
https://openreview.net/pdf?id=C9pndmSjg6
https://openreview.net/forum?id=C9pndmSjg6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uLYeBNVNlw", "nF41GIFyIH", "lTseLRRo6M", "i3eLcV7MQo", "dzQUZjqXK4", "TLSqoWsykD" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730307497410, 1730277803855, 1730038575415, 1730707910184, 1730719753037, 1737826003514 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10122/Reviewer_H2Gg" ], [ "ICLR.cc/2025/Conference/Submission10122/Reviewer_WWCK" ], [ "ICLR.cc/2025/Conference/Submission10122/Reviewer_NdKn" ], [ "ICLR.cc/2025/Conference/Submission10122/Reviewer_yYFx" ], [ "ICLR.cc/2025/Conference/Submission10122/Reviewer_dtjq" ], [ "ICLR.cc/2025/Conference/Submission10122/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose an approach to portfolio optimisation that relaxes a mixed integer quadratic program (MIQP) to a QP and then uses a genetic algorithm and neighbourhood search to find a good (low-risk, high-yield) portfolio of investments. The authors obtain superior results to CPLEX (a commercial MIQP solver).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper presents a comprehensive set of results that does slightly better overall than other techniques.\", \"weaknesses\": \"This is a judgement call, but for me this paper does not offer enough for an audience interested in representation learning. The proposed solution involves the use of three optimisers. The gain in performance is marginal. I struggle to see what the novel contribution is in terms of machine learning. This is likely to be of interest to financial modellers.\\n\\nAs an aside, the results are presented without error bars (in tables 1, 2 and 3). The results are given to a number of significant figures that seems much higher than I believe is justified. 
This apparent lack of care about statistical significance slightly undermines trust in the results.\", \"questions\": \"What are typical error bars for the numbers presented in tables 1, 2 and 3?\\n\\nIs there a core idea about how you learn a useful representation for your problem?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an approach for cardinality-constrained portfolio optimization problems using the Markowitz Mean-Variance (MV) approach. The authors aim to show that using linear relaxation techniques based on the dual formulation of the problem and a combination with a genetic algorithm, solution quality can be improved. Results are shown with the classical OR-Library dataset.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper follows the standards in portfolio optimization papers; it can therefore be read with clarity. The originality of the findings is rather low, since dual solution techniques and linear relaxations have already been proposed in the past. It is unclear how exactly the diagonalization model helps; however, the authors might elaborate on this as I see this as the only original contribution of the paper.\\nRegarding significance, there is no justification for the contributions of the paper. While the authors claim \\\"advancements\\\" and \\\"amplification of the practical application\\\" in the introduction, this is not justified. Since there is no source code provided, the quality of the experiments performed cannot be independently validated.\", \"weaknesses\": [\"The paper unfortunately shows major weaknesses, as I describe in the following:\", \"Originality: as I explained before, the originality of the paper is very limited. At most, the diagonalization method could be considered as novel, but it lacks justification. 
The authors make vague claims like \\\"approach the solution better\\\" and \\\"some relaxations is preferred\\\" but they don't justify the intention behind the method. Please provide a more specific justification of the novelty of the diagonalization method. Are there any examples or theoretical arguments that you can provide to back these claims?\", \"There is no clear mentioning of the contributions of the paper. The authors claim that the work \\\"showcases a significant advancement\\\" but they do not clarify what this advancement is. There is no justification for advancing the practical application. Please provide a statement for the key contributions of the paper and specific evidence of how your approach advances the practical application of portfolio optimization.\", \"Quality: the authors do not provide the source code used for the experiments. Most importantly, the authors compare with GA, SA and TS but give no details on which GA they are comparing with, nor details of the GA or TS. For their own GA, the authors do not provide any parameters like population size, probability of crossover, the parameters p and l mentioned in section 3.2, etc. The results are therefore *non-reproducible*. I suggest the authors provide the source code for the experiments (including the specific values used for the GA) and the concrete details and parameters used for GA, SA and TS.\", \"Finally, the authors make more unjustified claims in the results section like claiming that \\\"our method has fewer outliners and thus has more stable results\\\". I guess the authors mean \\\"outliers\\\" and there is no hint as to what this could mean. Section 4.3 provides a comparison with CPLEX where the impression is that the authors compare with the optimal solutions. To my great surprise, the authors then claim to have found a better solution than CPLEX does. It is therefore unclear what the authors are doing here. 
Please clarify if the solutions from CPLEX are optimal and how the solutions calculated are compared against the solutions from CPLEX. If the solutions are indeed better, it would help to discuss the implications for the use of CPLEX in this domain.\"], \"questions\": \"My first and foremost suggestion: please add justifications. The paper contains a significant number of unjustified claims of \\\"showcasing significant advancement\\\", but fails to exactly pinpoint where these advancements are. There is no hint of practical application as there are, for instance, no performance measurements.\", \"there_are_also_a_number_of_open_questions\": [\"In section 3.1, $M^{relax}$ is undefined.\", \"In section 3.2 the authors do not address what happens with duplicate phenotypes. The authors ignore the existing literature in other encodings like set encoding for genetic algorithms, see for instance Ruiz-Torrubiano, R., & Su\\u00e1rez, A. (2010). Hybrid approaches and dimensionality reduction for portfolio selection with cardinality constraints. IEEE Computational Intelligence Magazine, 5(2), 92-107. Article 5447939. https://doi.org/10.1109/MCI.2010.936308.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper considers a class of cardinality constrained mixed-integer quadratic programming (MIQP) models, which are traditional mathematical models for portfolio optimization (with return mean and variance being minimized in the objective with an investment budget constraint). 
The authors propose two heuristic approaches (genetic algorithm and neighborhood searching algorithm) and run computational studies to compare their approaches with an off-the-shelf optimization solver (i.e., CPLEX) for different formulations.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper aims to solve an NP-hard problem and introduces several alternative relaxations for MIQP (namely, Linear, Dual, Diag) in the literature.\", \"weaknesses\": \"1.\\tThe paper mainly focuses on heuristic approaches, without solution optimality guarantees. The two heuristics, i.e., genetic algorithm and neighborhood searching algorithm, are also too generic, and do not utilize any special problem structures to improve the results.\\n\\n2.\\tWith no contribution in theoretical studies, the numerical studies in this paper do not test any state-of-the-art instances (i.e., all stock data are from the 1990s) nor sufficiently large-scale instances, to at least show that the heuristic and relaxation can gain computational advantages in terms of time and solution quality. The benchmarked solver is CPLEX (without its version information); it is also an outdated solver and cannot represent more state-of-the-art integer programming solvers. \\n\\n3.\\tThe proposed methods and the research itself do not seem to be strongly related to the focus of the conference. \\n\\n4.\\tIn addition to the mean-variance way of doing portfolio optimization, which is the backbone of the MIQP model, there are other advances and studies in the portfolio optimization literature, which define risk in alternative quantitative ways under uncertain returns, and the MIQP model cannot capture these cases.\", \"questions\": \"1.\\tThe paper is about solving MIQP in general, with portfolio optimization as a demonstrating example. However, are there any special portfolio design problem structures they are considering? 
If not, I am not convinced that the paper is \\u201cadvancing portfolio optimization\\u201d as currently stated in the title.\\n2.\\tWhat is the version of CPLEX that the authors are using? Why not use Gurobi, which has reported significantly better performance for MIQP than most commercial solvers? \\n3.\\tFor the heuristic approaches, what are the merits and contributions? Are they providing better solutions with provable guarantees? Are they performing better numerically? Can they handle uncertainties in portfolio optimization better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a hybrid approach to solving the cardinality-constrained portfolio Mixed-Integer Quadratic Programming (MIQP) problem. The authors employ three relaxation techniques: Linear, Dual, and Diag to simplify the MIQP problem by relaxing constraints and embedding them into the objective function using Lagrange multipliers. This relaxation process converts the MIQP problem into a more manageable Quadratic Programming (QP) problem, a starting point for further refinement. Refinement is achieved using a combination of Genetic Algorithms (GA) and Neighborhood Search, which improves solution quality by iterating on the relaxed models. 
The paper evaluates this hybrid methodology, showing that it performs competitively with or better than other approaches, including CPLEX, across multiple datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Originality: The paper introduces a novel hybrid approach, combining three distinct relaxation models with heuristic methods to solve MIQP with cardinality constraints.\", \"clarity\": \"The paper is well-structured, making the approach easy to follow.\", \"significance\": \"The approach has practical relevance for portfolio optimization and potential applications in other constraint-heavy optimization problems, underscoring its impact.\", \"weaknesses\": \"Misinterpretation of Lagrangian Relaxation: The paper states that \\\"integer constraints\\\" are relaxed via Lagrangian relaxation. However, Lagrangian relaxation typically relaxes specific problem constraints rather than integrality requirements, which are handled differently in optimization. Upon review, it appears the authors have relaxed all constraints but not integrality requirements. Relaxing too many constraints in Lagrangian relaxation can result in overly loose bounds and challenges in finding feasible solutions. This oversight indicates a misunderstanding of Lagrangian relaxation principles, which is problematic given the paper\\u2019s reliance on this method as a core part of its approach. Could the authors discuss the potential implications of their relaxation choices on solution quality and feasibility?\", \"limited_discussion_on_non_smooth_optimization_challenges\": \"Lagrangian relaxation operates in the dual space, where the convergence of Lagrangian multipliers is impacted by non-smoothness, necessitating careful step-size selection for stability. Non-smoothness of the dual function comes from the presence of integer variables. 
This is well-documented in the literature on non-smooth optimization, yet the paper does not address the challenges associated with non-smoothness or how they were managed in this context. Including a discussion of the step sizes to ensure stable convergence, especially in the dual space, would strengthen the technical rigor of the paper. Could the authors address how they handled the non-smoothness issues in their approach, particularly in relation to step-size selection and convergence stability in the dual space?\", \"insufficient_and_outdated_references_on_lagrangian_relaxation\": \"Given that Lagrangian relaxation is foundational to the proposed approach, the paper's references to it are limited and largely outdated. While older sources are often foundational, advancements in Lagrangian relaxation techniques, especially those addressing dual convergence and stability, are essential to understanding and enhancing the approach. A review of recent literature in this area, specifically regarding improvements in stability and bounding methods, would be valuable. The reviewer would suggest referring, for example, to the following recent paper: M. A. Bragin, \\\"Survey on Lagrangian Relaxation for MILP: Importance, Challenges, Historical Review, Recent Advancements, and Opportunities,\\\" Annals of Operations Research, Volume 333, 2024, pp. 29-45.\", \"questions\": \"Step-size Selection: How are the step sizes chosen for the dual space optimization? Given the dual function's non-smooth nature, step-sizing is crucial for stable convergence. Please clarify the approach used here and any guidelines followed for tuning step sizes.\", \"convergence_guarantees\": \"Does the proposed approach offer any theoretical convergence guarantees in the dual space? If so, what are the conditions under which convergence is ensured? 
If not, are there empirical observations on how often the method converges in practice?\", \"convergence_speed_in_dual_space\": \"How fast does the algorithm converge in the dual space? Have you tracked or measured the convergence rate, and if so, could you provide insights on the number of iterations typically required to reach a stable solution?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes heuristic approaches for solving cardinality-constrained MIQP problems. This approach combines different relaxations with heuristics (genetic algorithms and neighborhood search). In a set of experiments with existing benchmark instances, the authors evaluate the gap to the optimal solution obtained with their approach (without reporting solution times).\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper presents an interesting approach for solving cardinality-constrained MIQPs that combines (partially new) relaxations with heuristic search approaches. As far as I can see, the approaches are original; however, I cannot tell the quality or significance of the approaches since the authors do not report any solution times, which would be necessary to properly assess this.\\n\\n2. The results indicate that the approach provides high-quality solutions.\", \"weaknesses\": \"1. The paper deals with a fairly specific class of optimization problems which is not of interest to a broader ICLR audience, and the paper does not involve any machine learning. It would be better suited for an optimization outlet.\\n\\n2. A motivation for proposing a heuristic solution approach is to find solutions faster than using exact approaches. 
The paper, however, does not report any solution time at all, and thus the readers have no idea if their approach is actually faster than exact state-of-the-art solvers such as CPLEX or Gurobi.\\n\\n3. I feel that the experimental results reported in Table 1 are flawed. How can the results of a heuristic approach be better than those obtained with an exact approach? This can only be the case if the model does not reflect the evaluation criterion.\", \"questions\": \"1. Please report the solution times of all evaluated approaches, including (exact) CPLEX.\\n\\n2. Consider also using Gurobi as an additional benchmark for state-of-the-art exact solvers; CPLEX development has been basically stagnating for a couple of years.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
C9ju8QQSCv
Can LLMs Solve Longer Math Word Problems Better?
[ "Xin Xu", "Tong Xiao", "Zitong Chao", "Zhenya Huang", "Can Yang", "Yang Wang" ]
Math Word Problems (MWPs) play a vital role in assessing the capabilities of Large Language Models (LLMs), yet current research primarily focuses on questions with concise contexts. The impact of longer contexts on mathematical reasoning remains under-explored. This study pioneers the investigation of Context Length Generalizability (CoLeG), which refers to the ability of LLMs to solve MWPs with extended narratives. We introduce Extended Grade-School Math (E-GSM), a collection of MWPs featuring lengthy narratives, and propose two novel metrics to evaluate the efficacy and resilience of LLMs in tackling these problems. Our analysis of existing zero-shot prompting techniques with proprietary LLMs along with open-source LLMs reveals a general deficiency in CoLeG. To alleviate these issues, we propose tailored approaches for different categories of LLMs. For proprietary LLMs, we introduce a new instructional prompt designed to mitigate the impact of long contexts. For open-source LLMs, we develop a novel auxiliary task for fine-tuning to enhance CoLeG. Our comprehensive results demonstrate the effectiveness of our proposed methods, showing improved performance on E-GSM. Additionally, we conduct an in-depth analysis to differentiate the effects of semantic understanding and reasoning efficacy, showing that our methods improve the latter. We also establish the generalizability of our methods across several other MWP benchmarks. Our findings highlight the limitations of current LLMs and offer practical solutions correspondingly, paving the way for further exploration of model generalizability and training methodologies.
[ "Large Language Models", "Math Reasoning", "Long Math Word Problems" ]
Accept (Poster)
https://openreview.net/pdf?id=C9ju8QQSCv
https://openreview.net/forum?id=C9ju8QQSCv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxy695BLrL", "y0fq9JmuD8", "vR4wiWqejw", "uHZT5Z4Fzf", "szPuiCc8zy", "rum6cF15Wl", "n5ALiUYUxe", "eryuNsOV6W", "egW7QGf9Dq", "be8MkmVLEL", "WiddiUkR1N", "WVy2Gs2iN9", "TEWokgzSB8", "SqCc7Ze45K", "QJWBoNzZHV", "NAqen96soT", "GP5iPnjYbo", "GB51cepEqm", "G7UFOE2muo", "EhxeL6RIDA", "DlugsVvlhz", "AUNwQChNs4", "7sCWMJnaS0", "65rShZOHgP", "4KtDm9sIg7", "2b5vIb64iJ", "2TAEasC3vq", "2Rvf8nyCGZ", "2GzTaEVGlI", "2DBIfABeGu", "0sEQxr37gj" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1731922029925, 1732626308530, 1732261811308, 1732264581729, 1732262448492, 1732160856291, 1737523397823, 1732671089535, 1732762045019, 1732763194270, 1732005595307, 1729519064581, 1730175040154, 1732677382895, 1730806286502, 1732611834924, 1732006012559, 1732760568944, 1731918525069, 1734835176328, 1732764941717, 1732261548074, 1732676217132, 1732264112290, 1732156780887, 1732764812801, 1732264337253, 1732358059590, 1731987516571, 1732673011196, 1729499800875 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_Dh39" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_Dh39" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_Dh39" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_jmci" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_Dh39" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_eQsN" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_cYhv" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_Dh39" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Area_Chair_sZqg" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_Dh39" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_Dh39" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_Dh39" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_Dh39" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Authors" ], [ "ICLR.cc/2025/Conference/Submission477/Reviewer_cYhv" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer jmci\", \"comment\": \"Dear Reviewer jmci,\\n\\nThank you for your time to review our work! We will answer your questions as follows:\\n\\n> The paper focuses on LLMs tackling longer math word problems, rather than genuinely difficult ones. Addressing truly challenging problems would likely yield more impactful and valuable research insights.\\n\\nThank you for your concern! 
Our motivation stems from observing that even for grade school calculation problems, which are \\\"not truly challenging\\\", LLMs exhibit discrepancies in performance when faced with problems that have longer contexts (as discussed in Section 2.1). We find that the difficulty level of grade school MWPs is not a confounder of the impact of context length on math reasoning performance, as demonstrated in Section 2.1. We believe that the challenges posed by long contexts in MWPs represent a significant \\u201cchallenging point\\u201d for current LLMs. \\n\\nOur study is centered on the influence of context length on LLM performance. To ensure a clear focus, we have deliberately isolated the effect of difficulty by controlling the difficulty level when creating E-GSM. We believe that pinpointing the current limitations of LLMs in solving MWPs could offer valuable insights and have a significant impact on future research and development. \\n\\n> A deeper analysis of the types of errors LLMs make on extended MWPs would strengthen the paper. This could shed light on whether mistakes stem from misinterpreting context, losing track of key information, or actual computational errors.\\n\\nThank you for your suggestion! We have already included a deeper analysis to understand the underlying factors of performance decrease on E-GSM in Section 4.2 in the original manuscript (Section 4.3 in our updated version). As suggested, we have added an error analysis on 50 randomly chosen bad cases in the fourth round of E-GSM. We find that 46% (23/50) of samples failed due to the incorrect extraction of known conditions, and the rest are due to flawed reasoning paths. We have **included this part in Appendix B.2 in our updated manuscript**. \\n\\n> The authors don't explore whether breaking down problems into atomic facts could help solve extended MWPs. 
It would be worthwhile to compare their methods against a baseline that first extracts crucial information from the lengthy context before attempting a solution. The techniques discussed in https://arxiv.org/abs/2305.14251 could be relevant here.\\n\\nThank you for pointing this out! We think this is not a baseline for the following reasons: \\n\\n- When we break down sentences into atomic facts, the context is still long or even longer. \\n\\n- They are totally different tasks. [1] uses this technique to do factual precision evaluation. \\n\\n- The idea might be in some sense similar to our CoRe. \\n\\nWe have **included such a discussion in Section 3.1 in our revised manuscript and cite this paper appropriately**: \\n\\n\\u201c[1] proposes a similar approach, suggesting that breaking down information into smaller components can enhance the evaluation of factual precision.\\u201d \\n \\n> The table captions should be placed above the tables, not below, to comply with ICLR's official template guidelines.\\n> The \\\"Experimental Setup\\\" section doesn't belong under Methodology. It should be moved to the Experiments section, alongside the results analysis.\\n\\nThank you for your comments! We have revised these accordingly in our updated manuscript. \\n\\nWe hope our response will address your concerns. If you have any further questions, feel free to discuss with us!\\n\\nSincerely,\\n\\nAuthors\\n\\n[1] Min, S., Krishna, K., Lyu, X., Lewis, M., Yih, W. T., Koh, P. W., ... & Hajishirzi, H. (2023). Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. arXiv preprint arXiv:2305.14251.\"}", "{\"comment\": \"Thanks for the update.\\n\\nMy major concern remains, i.e. the discrepancy in characteristics of your artificial long questions and the real natural long questions, and the question length difference in the real testbeds and your E-GSM. 
They are extremely important because you are studying \\\"Can LLMs Solve Long Math Word Problems Better?\\\", I do expect the conduct should involve real and natural long math questions.\"}", "{\"comment\": \">>\\u201cApart from 7,473 annotated examples available in GSM8K training set, we get D0 that incorporate 38,507 valid CoT data points \\u2026\\u201d, the numbers here confused me. If the authors generated five reasoning paths for each question in the training set, at most, D0 can have 7,473*5 questions, less than 38,507.\\n\\n>As shown in Section 3.2, we filter out examples whose answers do not align with the ground truth. This process is referred to as RFT [3] and is widely adopted in the field [2, 3, 4].\\n\\nAfter the filtering, should it be less than 7,473*5?\"}", "{\"comment\": \"Yes, 7473 is already the training set. \\\"The original training set\\\" refers to the GSM8K training set. Then we also generate 7473 * 5 and filter some wrong answers. The total number of questions after filtering should be less than 7473 + 7473*5 (we filter some in this part). By the way, 38,507 > 37,365 = 7473 *5.\\n\\nTo elaborate more, 38507 - 7473 (the GSM8K training set) = 31,034, which is our generated new samples after filtering. We filter 7473*5 - 31,034 = 6,331 examples whose answers are wrong.\\n\\nWe **have updated this sentence in our revised manuscript** to make it more accurate (L288-289 in our new version). Sorry for the confusion!\"}", "{\"comment\": \"Thank you for your reply! Let us solve this one first.\\n\\n> After the filtering, should it be less than 7,473*5?\", \"as_we_mentioned_in_l288_l289\": \"\\\"Apart from 7,473 annotated examples available in GSM8K training set\\\". We also incorporated the original training set, so the total number before filtering is 7473*6.\"}", "{\"comment\": \"Thank you for your response! 
We further explain as follows:\\n\\n**Q**: About the GPT4 experiments.\\n\\n**A**: As we have already explained, \\\"The reason why we chose GPT-3.5-turbo is that it is cheap and efficient.\\\". As you suggest, **we have already added the analysis of GPT-4o in Appendix F in our revised manuscript**.\\n\\n**Q**: About the gap between the generated verbose questions and the real-world long problems.\\n\\n**A**: You might have some misunderstandings! **Our E-GSM serves as a test bed of our research focus, and we have returned to the real-world problems in Table 3 and Section 4.4**. Our research aims to investigate the effect of context length on math reasoning performance, specifically focusing on the inconsistencies observed when solving math word problems (MWPs) with longer contexts. We isolate the effect of intrinsic difficulty to ensure a clear understanding of how context length alone affects performance (as discussed in Section 2.1). By utilizing E-GSM and our devised metrics, we can effectively analyze how extending the context of the same problem impacts the performance of LLMs (refer to our analysis in Section 4.2).\", \"you_might_have_overlooked_some_crucial_aspects_of_our_work_that_demonstrate_the_efficacy_of_our_methods_in_addressing_real_world_mwps\": [\"Existing MWP benchmarks are not long, as highlighted in the caption of Table 3, which indicates that these benchmarks have fewer than 100 tokens. 
We are not introducing E-GSM as a new benchmark for long MWPs; rather, we are using it as a test ground to study our research question.\", \"The results in Table 3 show that our method also provides benefits for solving real-world MWPs.\", \"The analysis in Section 4.4 demonstrates that our method yields better improvements for relatively long questions in the GSM8K dataset.\", \"If you have any questions, please drop on us, thank you!\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We truly appreciate the time and effort you\\u2019ve dedicated to reviewing our work.\\n\\nBased on your review, it appears that there are not many points requiring revision, except the point about \\\"creating a small set of extremely long MWPs (e.g., 1000+ tokens).\\\" We understand the value of investigating this scenario and considered this aspect during the rebuttal. However, we intentionally did not include such an experiment as we have anticipated Reviewer Dh39 will not buy this point either.\\n\\nWe believe our approach has already demonstrated strong potential for tackling extremely long MWPs, as evidenced by the results from the E-GSM evaluation and generalization results in other MWP benchmarks. As highlighted in the manuscript, current MWP benchmarks do not include examples with very long contexts. Therefore, instead of manually constructing such benchmarks, we chose to focus on showcasing how our SFT-based approach can effectively generate synthesized training data from short MWPs while being well-suited for extending to tasks with longer contexts.\\n\\nMay we kindly ask for additional suggestions to further strengthen our work? Your feedback in this regard would be incredibly valuable for us to improve our work.\\n\\nThank you again for your insightful suggestions and for helping us further improve our study. 
We remain open to additional feedback and ways we can further improve.\"}", "{\"comment\": \"By your point, we should also find approximately the same number in the question, and there is a discrepancy between GSM-hard and the real-world scenario, as no humans will encounter such problems in their life.\\n\\n\\nThe second reference is [2] Large Language Models Can Be Easily Distracted by Irrelevant Context. ICML 2023\\n\\n> Needle in the Haystack for Memory Based Large Language Models not a peer-reviewed paper.\\n\\nThis test is run by GPT-4, Claude, and many other famous LLMs [1, 2], which has shown its usefulness. It is also an artificial case; the point is to test LLMs' capability from various facets.\\n\\n[1] Liu, A., Feng, B., Wang, B., Wang, B., Liu, B., Zhao, C., ... & Xu, Z. (2024). Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model. arXiv preprint arXiv:2405.04434.\\n\\n[2] Yang, A., Yang, B., Hui, B., Zheng, B., Yu, B., Zhou, C., ... & Fan, Z. (2024). Qwen2 technical report. arXiv preprint arXiv:2407.10671.\"}", "{\"comment\": \"The original second reference was \\\"Scaling Relationship on Learning Mathematical Reasoning with Large Language Models\\\". I was not notified of your edits changing this reference to \\\"Large Language Models Can Be Easily Distracted by Irrelevant Context.\\\"\"}", "{\"title\": \"Response to Reviewer Dh39 (1/2)\", \"comment\": \"Dear Reviewer Dh39,\\n\\nThank you for your time to review our work! We will answer your questions as follows:\\n\\n> The paper explores the artificial long math problems, but in real cases, there are seldom questions written in the way that the authors presented, i.e. very verbose questions talking about a relatively simple math problem. Therefore, it is unknown whether the conduct here can help in solving real-world long math problems where although the question is quite long, it already describes the problem in a succinct way that it could. 
Better solving them is our ultimate goal, rather than solving the artificial verbose problems that are less likely to exist in the real world.\\n\\nOur research aims to investigate the effect of context length on math reasoning performance, specifically focusing on the inconsistencies observed when solving math word problems (MWPs) with longer contexts. We isolate the effect of intrinsic difficulty to ensure a clear understanding of how context length alone affects performance (as discussed in Section 2.1). By utilizing E-GSM and our devised metrics, we can effectively analyze how extending the context of the same problem impacts the performance of LLMs (refer to our analysis in Section 4.2).\", \"you_might_have_overlooked_some_crucial_aspects_of_our_work_that_demonstrate_the_efficacy_of_our_methods_in_addressing_real_world_mwps\": \"- Existing MWP benchmarks are not long, as highlighted in the caption of Table 3, which indicates that these benchmarks have fewer than 100 tokens. We are not introducing E-GSM as a new benchmark for long MWPs; rather, we are using it as a test ground to study our research question.\\n\\n- The results in Table 3 show that our method also provides benefits for solving real-world MWPs.\\n\\n- The analysis in Section 4.4 demonstrates that our method yields better improvements for relatively long questions in the GSM8K dataset.\\n\\n\\n> In Section 2.1, the authors examined the performance discrepancy of ChatGPT 3.5 in solving two versions (i.e. long form v.s. short form) of the same questions and concluded that LLMs struggle to answer math word problems with longer context. However, ChatGPT 3.5 is a relatively weak model now, I would suggest the authors do the same analysis with stronger open-source and proprietary LLMs.\\n\\nThe reason why we chose GPT-3.5-turbo is that it is cheap and efficient. Additionally, at the time we conduct our experiments, GPT-4o was not released. 
As suggested, we have added one strong model to do the same analysis in Appendix F in our revised manuscript. \\n\\n > Still in Section 2.1, the analysis here is based on real math questions, but the long questions in E-GSM are artificial. Therefore, it is not convincing to me that the conclusion in Section 2.1 can provide a solid foundation for the subsequent conduct.\\n\\nSection 2.1 serves as the motivation for our work, as it highlights our finding that in GSM8K, LLMs struggle to solve relatively long problems effectively. We have specifically isolated the effect of difficulty level, demonstrating that problem context length is associated with degraded performance, which is a significant limitation of current LLMs. Our experiments on E-GSM confirm that when the context of the same problem is lengthened, LLM performance declines, consistent with the findings in Section 2.1. Building on this foundation, we propose different methods for both closed-source and open-source LLMs, demonstrating that our approaches are beneficial not only for E-GSM but also for some real-world MWPs. This underscores why Section 2.1 is a crucial foundation for our research.\\n\\n\\n> The writing needs a thorough improvement:\\n\\u201cHuman evaluation details are provided in Appendix A.4.\\u201d has a wrong reference.\\nIn the first paragraph of Section 3, the subsections should be introduced in order.\\nThe second sentence of Section 3.1 has redundancy.\\nThe first two sentences of Section 3.2 are not about open-source LLMs, therefore, they cannot help develop this section. The third sentence is redundant. In the fourth sentence, \\u201ctheir generated reasoning paths\\u201d should be referred to the place that telling how it is done. 
The loss function has a typo, should be \\u201c (q, e, a)\\u201d.\\nSection 3.3, \\u201cTo negate the influence of few-shot demonstrations\\u201d, should be specific, what is the influence?\\nRepeated sentences in the third paragraph of Section 4.1.\\n\\nThank you for pointing this out! We have revised these in our updated manuscript.\"}", "{\"summary\": \"This paper investigates the ability of LLMs to solve math word problems (MWPs) with longer contexts, introducing the concept of Context Length Generalizability (CoLeG). The key contributions are:(1) Creating Extended Grade-School Math (E-GSM), a dataset of MWPs with extended narratives. (2) Proposing two metrics to evaluate LLMs' efficacy and resilience on E-GSM. (3) Developing tailored prompts for proprietary LLMs to improve CoLeG. (4) Using extension as an auxiliary fine-tuning task for open-source LLMs. (5) Analyzing the impact on semantic understanding vs reasoning efficacy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Strong motivation through rigorous statistical analysis shows LLMs struggle with longer MWPs (Section 2.1)\\n\\nProposes creative solutions (CoRe prompting and extension fine-tuning) to address identified limitations\\n\\nWell-designed metrics (CoLeG-E and CoLeG-R) that capture both efficacy and robustness of LLMs on long MWPs\\n\\nSufficient experiments have proven the effectiveness of the method\", \"weaknesses\": \"The paper focuses on LLMs tackling longer math word problems, rather than genuinely difficult ones. Addressing truly challenging problems would likely yield more impactful and valuable research insights.\\n\\nA deeper analysis of the types of errors LLMs make on extended MWPs would strengthen the paper. 
This could shed light on whether mistakes stem from misinterpreting context, losing track of key information, or actual computational errors.\\n\\nThe authors don't explore whether breaking down problems into atomic facts could help solve extended MWPs. It would be worthwhile to compare their methods against a baseline that first extracts crucial information from the lengthy context before attempting a solution. The techniques discussed in https://arxiv.org/abs/2305.14251 could be relevant here.\\n\\nThe table captions should be placed above the tables, not below, to comply with ICLR's official template guidelines.\\n\\nThe \\\"Experimental Setup\\\" section doesn't belong under Methodology. It should be moved to the Experiments section, alongside the results analysis.\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors investigated the performance of LLMs in solving long math problems. They first examined the performance discrepancy of ChatGPT 3.5 in solving two versions (i.e. long form v.s. short form) of the same questions and concluded that LLMs struggle to answer math word problems with longer context.\\n\\nThen, they propose an automatic approach to extend the GSM questions into their long versions (named the obtained dataset E-GSM), with the same computation logic remaining, as far as they could. \\nAfter that, the paper presented a method called CoRe to help proprietary LLMs better handle these long-form questions. \\nFor the open source LLMs, the authors fine-tuned them with a fine-tuning dataset comprising 65K CoT data, created by the authors. \\n\\n\\nThe paper introduced E-GSM containing artificial long math problems, but in real cases, there are seldom questions written in the way that the authors presented, i.e. very verbose questions talking about a relatively simple math problem. 
Therefore, it is unknown whether the conduct here can help in solving real-world long math problems where although the question is quite long, it already describes the problem in a succinct way that it could. Better solving them is our goal, rather than solving the artificial verbose problems that are less likely to exist in the real world. Although they show the same characteristic length-wise, the capability of solving the latter is not necessarily helpful for solving the former.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper explored the impact of question length on LLMs\\u2019 performance and proposed a method to extend the length of GSM questions. The paper presented a method called CoRe to help proprietary LLMs better handle these long-form questions. For the open source LLMs, the authors fine-tuned them with a fine-tuning dataset comprising 65K CoT data, created by the authors.\", \"weaknesses\": [\"1. The paper explores the artificial long math problems, but in real cases, there are seldom questions written in the way that the authors presented, i.e. very verbose questions talking about a relatively simple math problem. Therefore, it is unknown whether the conduct here can help in solving real-world long math problems where although the question is quite long, it already describes the problem in a succinct way that it could. Better solving them is our ultimate goal, rather than solving the artificial verbose problems that are less likely to exist in the real world.\", \"2. Besides the above major point, there are more points:\", \"In Section 2.1, the authors examined the performance discrepancy of ChatGPT 3.5 in solving two versions (i.e. long form v.s. short form) of the same questions and concluded that LLMs struggle to answer math word problems with longer context. 
However, ChatGPT 3.5 is a relatively weak model now, I would suggest the authors do the same analysis with stronger open-source and proprietary LLMs.\", \"Still in Section 2.1, the analysis here is based on real math questions, but the long questions in E-GSM are artificial. Therefore, it is not convincing to me that the conclusion in Section 2.1 can provide a solid foundation for the subsequent conduct.\", \"3. Many parts are not clear, see the questions section.\", \"4. The writing needs a thorough improvement:\", \"\\u201cHuman evaluation details are provided in Appendix A.4.\\u201d has a wrong reference.\", \"In the first paragraph of Section 3, the subsections should be introduced in order.\", \"The second sentence of Section 3.1 has redundancy.\", \"The first two sentences of Section 3.2 are not about open-source LLMs, therefore, they cannot help develop this section. The third sentence is redundant. In the fourth sentence, \\u201ctheir generated reasoning paths\\u201d should be referred to the place that telling how it is done. The loss function has a typo, should be \\u201c (q, e, a)\\u201d.\", \"Section 3.3, \\u201cTo negate the influence of few-shot demonstrations\\u201d, should be specific, what is the influence?\", \"Repeated sentences in the third paragraph of Section 4.1.\"], \"questions\": \"1. According to \\u201cEvaluation results shows that 94.5% questions possess accepatable quality\\u201d, the total questions from rounds 1 to 4 should be about 5K. But in Table 1, it is only 4.5K.\\n\\n2. As shown in Table 1, different rounds have different numbers of questions, what\\u2019s the impact on the defined metrics? namely CoLeG-E and CoLeG-R?\\n\\n3. In Table 2, were the fine-tuned models evaluated with the CoRe method? can they be tested in the same way as those proprietary models? \\n\\n4. 
\u201cApart from 7,473 annotated examples available in GSM8K training set, we get D0 that incorporate 38,507 valid CoT data points \u2026\u201d, the numbers here confused me. If the authors generated five reasoning paths for each question in the training set, at most, D0 can have 7,473*5 questions, less than 38,507.\\n\\n5. In Section C.2, \\u201cThe results suggest scaling up model scales and SFT dataset can further improve CoLeG.\\u201d, this conclusion may not be valid. Under CoLeG-R, after the SFT on D0, D1, and D2, the performance is not improved.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your question!\", \"we_have_already_mentioned_the_main_focus_of_e_gsm\": [\"\\\"Our research aims to investigate the effect of context length on math reasoning performance, specifically focusing on the inconsistencies observed when solving math word problems (MWPs) with longer contexts. We isolate the effect of intrinsic difficulty to ensure a clear understanding of how context length alone affects performance (as discussed in Section 2.1).\\\"\", \"\\\"Introducing such a benchmark could deserve a paper in the \\\"dataset and benchmark\\\" track. That is why we resort to transforming existing benchmarks (GSM8K) to get a new testbed. We are studying a limitation of current LLMs, not releasing a benchmark.\\\"\"], \"additional_reasons\": \"- As existing MWP benchmarks are not that long, how could we test the performance on long MWPs? That is why we resort to transforming existing benchmarks (GSM8K) to get a new testbed. We are studying a limitation of current LLMs, not releasing a benchmark. \\n\\n- By your point, if a benchmark does not exist, there is no need to study this field? In fact, there are many endeavors [1, 2] that adapt existing benchmarks to study specific research questions. There are no real math problems occurring in the way described in [1, 2]. 
However, it is still worthwhile to do so because we expect our LLMs to become stronger and stronger and to handle any unreal case. Another unreal case is [3]. One of the reasons to conduct these is to inspect LLMs' ability from different aspects. Our research falls into this category.\\n\\n[1] https://huggingface.co/datasets/reasoning-machines/gsm-hard\\n\\n[2] Large Language Models Can Be Easily Distracted by Irrelevant Context. ICML 2023. https://arxiv.org/abs/2302.00093\\n\\n[3] https://github.com/gkamradt/LLMTest_NeedleInAHaystack\"}", "{\"summary\": \"This work examines the effect of extended contexts on mathematical reasoning and introduces the Extended Grade-School Math (E-GSM) dataset, featuring math problems with lengthy narratives. Analysis reveals that current LLMs struggle with E-GSM, prompting the authors to propose new methods to address these challenges.\\n\\nFor proprietary LLMs, they introduce a new instructional prompt, while for open-source LLMs, they develop a novel auxiliary fine-tuning task. These approaches aim to enhance model performance in handling extended-context MWPs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper introduces E-GSM, a dataset with lengthy, distracting sentences that make it considerably more challenging than the original GSM. This dataset offers a valuable tool for evaluating the robustness of LLMs.\", \"The approach used to create E-GSM can also be applied to expand existing math training datasets, providing new supervised fine-tuning (SFT) data in the math domain.\"], \"weaknesses\": \"- The augmented math questions may include contradicting sentences. 
The augmented math questions may become unsolvable or yield answers that differ from the original ones.\\nAlthough human evaluations on 200 samples suggest that \\u201c94.5% of questions meet acceptable quality,\\u201d this accuracy may still be inadequate, particularly given that the labels in the GSM8K test set might contain errors.\\nAn alternative could be to release these 200 samples as a verified subset of the E-GSM dataset. Reporting CoLeG-E and CoLeG-R results on the 200 samples, both with and without verification, would also be helpful.\\n\\n- In Table 2, the higher results w/ $\\\\mathcal{D}$ (compared to w/ $\\\\mathcal{D_0}$) may be because the size of $\\\\mathcal{D}$ is larger than $\\\\mathcal{D_0}$.\", \"questions\": \"- How is E-GSM different from GSM-IC[1]?\\n\\n[1] Large Language Models Can Be Easily Distracted by Irrelevant Context. ICML 2023. https://arxiv.org/abs/2302.00093\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Thanks for the clarifications that you have made. I appreciate it.\\nWhile it does clear up some of the concerns, I believe the paper will significantly benefit from a thorough revision.\\nFor the time being, I have decided to keep my scores.\\nThanks.\"}", "{\"title\": \"Response to Reviewer Dh39 (2/2)\", \"comment\": \"> According to \\u201cEvaluation results shows that 94.5% questions possess accepatable quality\\u201d, the total questions from rounds 1 to 4 should be about 5K. But in Table 1, it is only 4.5K.\\n\\nAs explained in Lines 173\\u2013176, we employ two heuristics to filter out \\\"bad\\\" extended questions. The specifics of these heuristics can be found in Appendix A.3, while the filtering process is detailed in Appendix A.4. 
\\n\\nThe core idea behind our approach is to use entailment and solvability as metrics to filter out a substantial portion of questions, ensuring that all \\\"bad\\\" questions identified during our human evaluation are eliminated. This screening process explains why the number of questions presented in Table 1 diminishes with each successive round. \\n\\n> As shown in Table 1, different rounds have different numbers of questions, what\\u2019s the impact on the defined metrics? namely CoLeG-E and CoLeG-R?\\n\\nPlease refer to Section 2.3 for the calculation process of our metrics. They are well-defined for the different numbers of questions in each round, and they are fair for different LLMs. \\nSpecifically, CoLeG-E is defined according to the number of questions in the fourth round, and CoLeG-R is determined by accuracy in round 0 and round 4. As the number of questions in each round is larger than 1000, accuracy on each round is well-defined and statistically reliable.\\n\\n> In Table 2, were the fine-tuned models evaluated with the CoRe method? can they be tested in the same way as those proprietary models?\\n\\nNo, they are not evaluated using the CoRe method. As detailed in Appendix B.5, we use the prompt specified in Table 8 for evaluation. They cannot be tested in the same manner as proprietary models because the evaluation prompt needs to be aligned with the training prompt, which is a common practice in the field [1, 2].\\n\\n> \\u201cApart from 7,473 annotated examples available in GSM8K training set, we get D0 that incorporate 38,507 valid CoT data points \\u2026\\u201d, the numbers here confused me. If the authors generated five reasoning paths for each question in the training set, at most, D0 can have 7,473*5 questions, less than 38,507.\\n\\nAs shown in Section 3.2, we filter out examples whose answers do not align with the ground truth. 
This process is referred to as RFT [3] and is widely adopted in the field [2, 3, 4].\\n\\n> In Section C.2, \\u201cThe results suggest scaling up model scales and SFT dataset can further improve CoLeG.\\u201d, this conclusion may not be valid. Under CoLeG-R, after the SFT on D0, D1, and D2, the performance is not improved.\\n\\nCoLeG-R represents just one aspect of our evaluation. Both CoLeG-E and accuracy across all rounds have shown improvement.\\n\\n\\nWe hope our response will address your concerns. If you have any further questions, feel free to discuss with us!\\n\\nSincerely,\\n\\nAuthors\\n\\n[1] Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., ... & Zhang, D. (2023). Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583.\\n\\n[2] Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., ... & Liu, W. (2023). Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284.\\n\\n[3] Yuan, Z., Yuan, H., Li, C., Dong, G., Lu, K., Tan, C., ... & Zhou, J. (2023). Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825.\\n\\n[4] Tong, Y., Zhang, X., Wang, R., Wu, R., & He, J. (2024). Dart-math: Difficulty-aware rejection tuning for mathematical problem-solving. arXiv preprint arXiv:2407.13690.\"}", "{\"comment\": \"> GSM-hard: \\\"We construct this dataset by replacing the numbers in the questions of GSM8K with larger numbers that are less common.\\\"\\n\\nSo it mostly consists of real math questions; the question descriptions are natural.\\n\\n\\n> Scaling Relationship on Learning Mathematical Reasoning with Large Language Models\\n\\nNot a peer-reviewed paper. 
Moreover, I cannot see its resemblance to your work.\\n\\n\\n> Needle in the Haystack for Memory Based Large Language Models\\n\\nNot a peer-reviewed paper.\"}", "{\"title\": \"Response to Reviewer eQsN\", \"comment\": \"Dear Reviewer eQsN,\\n\\nThank you for your time to review our work! We will answer your questions as follows:\\n\\n> The augmented math questions may include contradicting sentences. The augmented math questions may become unsolvable or yield answers that differ from the original ones. Although human evaluations on 200 samples suggest that \\u201c94.5% of questions meet acceptable quality,\\u201d this accuracy may still be inadequate, particularly given that the labels in the GSM8K test set might contain errors. An alternative could be to release these 200 samples as a verified subset of the E-GSM dataset. Reporting CoLeG-E and CoLeG-R results on the 200 samples, both with and without verification, would also be helpful.\\n\\nThank you for your question. The human evaluation criteria are detailed in Appendix A.2. Specifically, any question that includes contradictory sentences or yields a different answer from the original problem is classified as \\\"poor\\\" quality. **As explained in Lines 173\\u2013176**, we employ two heuristics to filter out \\\"bad\\\" extended questions. The specifics of these heuristics can be found in Appendix A.3, while the filtering process is detailed in Appendix A.4. The core idea behind our approach is to use entailment and solvability as metrics to filter out a substantial portion of questions, ensuring that all \\\"bad\\\" questions identified during our human evaluation are eliminated. This screening process explains why the number of questions presented in Table 1 diminishes with each successive round. \\n\\n\\n> In Table 2, the higher results w/D (compared to w/$D_0$) may be because the size of D is larger than $D_0$.\\n\\nThank you for pointing this out. We expand $D_0$ to the same size of D by further RFT [1]. 
The results for Llama-2-7B are given as follows: \\n\\n| Method | CoLeG-E | CoLeG-R | $Acc_0$ | $Acc_1$ | $Acc_2$ | $Acc_3$ | $Acc_4$ |\\n| - | - | - | - | - | - | - | - |\\n| $D_0$ | 20.22 | 66.64 | 58.45 | 49.62 | 42.96 | 40.94 | 38.95 |\\n| expanded $D_0$ | 20.34 | 66.28 | 58.99 | 50.06 | 43.35 | 41.25 | 39.10 |\\n| $D_1$ | **28.09** | **80.97** | **59.44** | **57.57** | **50.92** | **49.44** | **48.13** |\\n\\nWe can see that there is not much improvement, so the claim still holds. We hypothesize that this is because the set of unique questions remains unchanged; simply applying RFT yields similar solutions, resulting in minimal improvement for SFT. Moreover, the addition of short questions does not substantially enhance performance for E-GSM. \\n\\n\\n> How is E-GSM different from GSM-IC?\\n\\nThank you for your question! E-GSM differs from GSM-IC in the following ways: \\n\\n- E-GSM is more challenging (not in terms of the difficulty level of the problems) than GSM-IC. GSM-IC uses a template-based method to insert one irrelevant sentence into GSM8K problems, which initially reduced the performance of earlier LLMs like text-davinci-003. However, as LLMs become more sophisticated, GSM-IC no longer poses a significant challenge. For example, the current version of GPT-3.5-turbo achieves 88.35% accuracy in GSM-IC with 0-CoT (as shown in Table 3). In contrast, our E-GSM extends the context of GSM8K problems to create longer scenarios, which are inherently more challenging. Specifically, the accuracy of GPT-3.5-turbo on the fourth round of E-GSM is only 64.42% with 0-CoT. \\n\\n- Different research focus. GSM-IC explores the impact of introducing a single irrelevant sentence on the mathematical reasoning capabilities of LLMs. In contrast, our research with E-GSM is intended to examine the inconsistency of LLMs when solving extended math problems of the same difficulty level, as motivated by our discussion in Section 2.1. \\n\\nWe hope our response will address your concerns. 
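As an aside, the ablation numbers in the table above are internally consistent: the reported CoLeG-R values coincide with the fraction of round-0 accuracy retained in round 4. The thread does not restate the paper's exact formula, so the ratio below is an inference from the table rather than the official definition; a quick check in Python:

```python
def coleg_r(acc0, acc4):
    # Inferred robustness metric: percentage of round-0 accuracy
    # that survives to round 4 (consistent with the reported values).
    return 100.0 * acc4 / acc0

# (Acc_0, Acc_4, reported CoLeG-R) triples from the Llama-2-7B table.
rows = {
    "D_0":          (58.45, 38.95, 66.64),
    "expanded D_0": (58.99, 39.10, 66.28),
    "D_1":          (59.44, 48.13, 80.97),
}
for name, (a0, a4, reported) in rows.items():
    print(name, round(coleg_r(a0, a4), 2), reported)
```

All three rows reproduce the reported CoLeG-R to two decimals, which supports reading CoLeG-R as a retention ratio between the first and last rounds.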
If you have any further questions, feel free to discuss with us!\\n\\nSincerely,\\n\\nAuthors\\n\\n\\n[1] Yuan, Z., Yuan, H., Li, C., Dong, G., Lu, K., Tan, C., ... & Zhou, J. (2023). Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825.\\n\\n\\n.\"}", "{\"metareview\": \"This paper presents E-GSM, a collection of math word problems that feature lengthy narratives and then propose two novel metrics to evaluate whether current LLMs can handle these problems. They evaluate several proprietary LLMs and some open source LLMs to see how they perform on this collection. They also fine tune the open source models to perform better on these tasks.\", \"strengths\": \"1. Contribution of a new dataset.\\n2. Analysis of various LLMs on longer MWPs.\\n3. Significant number of experiments.\\n4. New metrics: CoLeG-E and CoLeG-R.\", \"weaknesses\": \"1. More deeper analysis of why LLMs have issues with such longer MWPs.\\n2. Answer extraction using GPT3.5 (there has been discussion about this).\\n3. There are some writing improvement suggestions.\", \"additional_comments_on_reviewer_discussion\": \"There has been a lot of discussion between the authors and the reviewers for this paper. I decided to ignore the review of Dh39 since I did not find the points very pertinent to a fair evaluation of the paper.\"}", "{\"comment\": \"By the way, the edit time is before your response time. If you complain about there is no editing notification, why not email PCs about this issue?\"}", "{\"comment\": \"In your first response, it was mentioned that \\\"we have added one strong model to do the same analysis in Appendix F in our revised manuscript.\\\", which was GPT-4o-mini. I expected GPT-4o to be used, as we normally think GPT-4o has stronger reasoning capability than GPT-4o-mini. Moreover, I would like to see how GPT o1 performs here. 
In the new experiment with GPT-4o-mini in Figure 9, compared with Figure 1, the length gap between the False and True groups becomes much smaller.\\n\\nIn Table 3, the average number of tokens of MAWPS, SVAMP, and GSM-IC are 52, 54, 80, respectively. However, in your E-GSM, the average length of Q_1 questions is about 192, and more than 300 from Q_2 onward. I do not think there is a strong rationale to believe it solved my original concerns.\"}", "{\"comment\": \"A quick question first. If \\\"It is also unreasonable to require results on benchmarks with the exact same tokens as E-GSM, as existing MWPs benchmarks are not that long.\\\", what's the point of making such long and verbose questions in E-GSM?\"}", "{\"comment\": \"But 7,473 is already the GSM8K training set, right? Which else is the original training set?\"}", "{\"comment\": \"Thanks for your response, but unfortunately, the authors did not directly answer or solve most of my questions and concerns. For example:\\n\\n- the gap between the generated verbose questions and the real-world long questions;\\n\\n- why GPT-4 was not used for the experiments in this submission, given that the deadline for ICLR 25 is Oct 1st, 2024, far after the release of GPT-4;\\n\\n- ......\\n\\nI would suggest the authors respond again ASAP.\"}", "{\"comment\": \"Additional reasons:\\n\\n- As existing MWPs benchmarks are not that long, how could we test the performance on long MWPs? That is why we resort to transforming existing benchmarks (GSM8K) to get a new testbed. We are studying a limitation of current LLMs, not releasing a benchmark.\\n\\n- By your logic, if a benchmark does not exist, is there no need to study this field? In fact, there are many endeavors [1, 2] that adapt existing benchmarks to study specific research questions. The problems in [1, 2] are not real math problems as they would occur naturally. However, it is still worthwhile to do so because we expect our LLMs to become stronger and stronger and able to handle any unreal case. 
Another unreal case is [3]. One of the reasons to conduct these studies is to inspect LLMs' ability from different aspects. Our research falls into this category.\\n\\n\\nDid you receive a notification this time?\\n\\n[1] https://huggingface.co/datasets/reasoning-machines/gsm-hard\\n\\n[2] Large Language Models Can Be Easily Distracted by Irrelevant Context. ICML 2023. https://arxiv.org/abs/2302.00093\\n\\n[3] https://github.com/gkamradt/LLMTest_NeedleInAHaystack\"}", "{\"comment\": \"Thank you for your engagement in further discussion!\\n\\n> I expected GPT-4o to be used, as we normally think GPT-4o has stronger reasoning capability than GPT-4o-mini. Moreover, I would like to see how GPT o1 performs here. \\n\\nSorry about the confusion. The reasons we use GPT-4o-mini are: 1. It already achieves over 93% on GSM8K, which is strong. 2. It is cheaper than GPT-4o and in the same series. \\n\\nAs you suggested, we are now adding GPT-4o and o1. We will let you know when it is done.\\n\\n\\n> In the new experiment with GPT-4o-mini in Figure 9, compared with Figure 1, the length gap between the False and True groups becomes much smaller.\\n\\nEven for a strong model like GPT-4o-mini (over 93% accuracy on GSM8K), **there is still a gap between the False and True groups**, which is indeed a limitation in math reasoning (under statistical significance). \\nThe interesting thing is that **this phenomenon aligns well with our results in Table 2 and analysis in Section 4.2**: the stronger LLM tends to have better performance on E-GSM in our metrics, which also **justifies the reasonability of our design principle of E-GSM**. \\nWe believe the usefulness of E-GSM is to enlarge this performance gap among different LLMs that may be ignored in the original GSM8K and to show the rankings of LLMs from this perspective.\\n\\n\\n\\n> In Table 3, the average number of tokens of MAWPS, SVAMP, and GSM-IC are 52, 54, 80, respectively. 
However, in your E-GSM, the average length of Q_1 questions is about 192, and more than 300 from Q_2 onward. I do not think there is a strong rationale to believe it solved my original concerns.\\n\\nI think the main logic of our work is: we discern a limitation in Section 2.1, develop a test bed to discuss and investigate this problem (build up E-GSM), propose our methods and show their efficacy in E-GSM, then we return to real-world problems (Table 3 and Section 4.2).\\n**Additional evidence could be the second paragraph (Lines 453-463) in Section 4.2 and Figure 5 (right)**, where our method can improve the performance of relatively longer real-world problems in GSM8K (83-203 tokens).\\nAnother reason is that the current MWP benchmarks are somewhat short, and that is why we develop a new test bed to investigate our research problem.\"}", "{\"comment\": \"Our manuscript has been updated to incorporate GPT-4o and o1 in Appendix F.\\n\\nThe results also show that context length is still a problem for these strong models.\"}", "{\"title\": \"Response to Reviewer cYhv\", \"comment\": \"Dear Reviewer cYhv,\\n\\nThank you for your time to review our work! Your feedback is really thoughtful and meaningful. We have tried our best but are only able to answer some of your questions, as follows:\\n\\n> The paper lacks a detailed exploration of why longer contexts impact LLM performance. While the authors mention potential working memory limitations, a deeper analysis could provide valuable insights. For instance, examining how performance correlates with the models' context window sizes or investigating the behavior of attention patterns in different layers could shed light on where breakdowns occur. Additionally, analyzing how different positional encoding schemes (e.g., rotary position embeddings vs. 
absolute position embeddings) affect performance on longer MWPs could offer insights into architectural considerations for improving CoLeG.\\n\\nThank you for your insightful comments! In fact, we have tried to investigate why longer contexts impact LLM performance and included a fine-grained analysis in Section 4.3, where we use two different metrics to capture the semantic understanding and missing steps in math reasoning. We find that both are influenced by longer contexts. \\n\\nThank you for pointing out alternative ways to analyze this from attention patterns, positional encoding schemes, and context window sizes. \\n\\n- We believe analyzing attention patterns is interesting, but it is difficult. Even the EMNLP best paper [1] only analyzes the pattern for classification tasks. The attention patterns and the intrinsic attention mechanism are still open research questions. It is really a thoughtful comment and points to an interesting direction for future work. \\n\\n- About the context windows. As the context window sizes are decided at the pretraining stage, we cannot afford to do such experiments. Additionally, if the input text is longer than the context window, the LLMs cannot \\u201csee\\u201d the tokens that exceed the max input length. However, our work focuses on the impact of the context when LLMs \\u201csee\\u201d the entire problem. \\n\\n- About positional encoding schemes. Similarly, positional encoding schemes are also decided at the pretraining stage. Additionally, we have incorporated LLMs with different positional encoding schemes in our experiments. \\n\\n> The evaluation of open-source LLMs is limited to LLaMA-2 and Mistral-7B families. To provide a more comprehensive assessment, the authors should consider including models specifically designed for mathematical reasoning, such as MathGPT, GPT-f, or MetaMath. 
Additionally, evaluating performance on models with different architectural choices, like PaLM or BLOOM, could offer insights into how various model designs handle longer MWPs. This broader evaluation would strengthen the claims about the generalizability of the proposed methods.\\n\\nThank you for your suggestion! In Section 4.4, we include experiments with MetaMath. As suggested, we have included experiments on some specialized math LLMs in Appendix C.3 in our updated version.\\n\\n> The use of GPT-3.5-turbo for answer extraction in the evaluation process introduces a potential confounding factor. The paper doesn't adequately address how this might impact results, especially for non-OpenAI models. The authors should consider comparing this extraction method with simpler rule-based approaches or using model-specific output parsing to ensure fair comparison across different LLM families.\\n\\nThank you for your question! We have already discussed this issue in Appendix B.4. The fact is that simple rule-based parsing is not accurate. For example, zero-shot-cot will use the last number as the final answer if it fails to parse the answer from the pattern \\u201cthe answer is\\u201d. We randomly select 50 cases and find that the last number is not the answer for 9 out of 50 cases. There is no clear answer pattern for general-purpose LLMs (unlike specialized math LLMs), which is why we use gpt-3.5-turbo to extract the answers. It is also a common practice to use LLMs as an answer extractor [2]. \\n\\n> The extension approach shows promise for open-source LLMs. Have you considered how this might be adapted for extremely long MWPs or multi-step reasoning problems that span multiple pages?\\n\\nThank you for your recognition of our approach. Current MWPs are not that long, and our SFT approach has shown great potential to create synthetic training data from short MWPs that is suitable for extremely long MWPs. 
\\n\\nWe hope our response has addressed some of your concerns. We appreciate the opportunity to engage in discussion with a thoughtful reviewer like you. If you have any additional comments or would like to discuss further, please feel free to reach out to us.\\n\\n\\nSincerely,\\n\\nAuthors\\n\\n[1] Label words are anchors: An information flow perspective for understanding in-context learning. arXiv preprint arXiv:2305.14160.\\n\\n[2] Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255.\"}", "{\"comment\": [\"Thank you for your reply.\", \"We still believe your major concern is not a problem, for the following reasons:\", \"Our title should be \\\"Can LLMs Solve Longer Math Word Problems Better\\\" (in PDF), which highlights our research focus: investigating the effect of longer context on solving MWPs. E-GSM serves as a good testbed as it isolates the effect of difficulty level and checks the performance discrepancy of the same problems with longer and longer context.\", \"About the discrepancy of characteristics. The main results have shown the performance drop of LLMs in E-GSM, which aligns well with the real-world GSM8K (in Section 2.1, which also serves as our motivation). Results of our methods improve the performance in E-GSM and also show superior results in many real-world MWP benchmarks. We believe these two points have shown the reliability and reasonability of using E-GSM.\", \"About the question length difference. The second paragraph (Lines 453-463) in Section 4.2 and Figure 5 (right) have already shown that our method can improve the performance of relatively longer real-world problems in GSM8K (83-203 tokens). Additionally, current MWP benchmarks do not include examples with very long contexts. Introducing such a benchmark could deserve a paper in the \\\"dataset and benchmark\\\" track. That is why we resort to transforming existing benchmarks (GSM8K) to get a new testbed. 
We are studying a limitation of current LLMs, not releasing a benchmark. We believe our work is a good trial in this direction. It is also unreasonable to require results on benchmarks with the exact same tokens as E-GSM, as existing MWPs benchmarks are not that long. Additionally, we have revealed that the main improvement of our approach is from \\\"our methods could improve the performance of relatively longer MWPs\\\", which also aligns well with the word in our title \\\"longer\\\".\"]}", "{\"summary\": \"This paper investigates the performance of LLMs on Math Word Problems with extended narratives, introducing the concept of Context Length Generalizability (CoLeG). The authors created a new dataset, Extended Grade-School Math (E-GSM), by iteratively extending problems from GSM8K. They propose two novel metrics, CoLeG-E and CoLeG-R, to evaluate efficacy and robustness respectively. The study reveals that existing LLMs struggle with longer MWPs, showing a consistent performance decline as context length increases. To address this, the authors introduce Condition-Retrieving Instruction (CoRe) for proprietary LLMs and an extension-based fine-tuning approach for open-source LLMs. These methods demonstrate improvements in CoLeG across various LLM types and generalize well to other MWP benchmarks as well.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper addresses a gap in current research by focusing on LLMs' ability to handle longer MWPs, which is more reflective of real-world mathematical reasoning tasks. The focus on CoLeG provides insights into the limitations of current LLMs and pathways for improvement.\\n\\n2. The creation of the E-GSM dataset through a systematic extension process is another contribution. By maintaining problem difficulty while increasing context length, the authors have developed a framework for evaluating LLM performance on longer MWPs.\\n\\n3. 
The introduction of CoLeG-E and CoLeG-R metrics offers a more comprehensive evaluation framework than traditional accuracy measures. These metrics provide insights into both the consistency and robustness of LLM performance across varying context lengths.\\n\\n4. The proposed methods, CoRe and extension-based fine-tuning, show consistent improvements across different LLM types and generalize well to other benchmarks.\", \"weaknesses\": \"1. The paper lacks a detailed exploration of why longer contexts impact LLM performance. While the authors mention potential working memory limitations, a deeper analysis could provide valuable insights. For instance, examining how performance correlates with the models' context window sizes or investigating the behavior of attention patterns in different layers could shed light on where breakdowns occur. Additionally, analyzing how different positional encoding schemes (e.g., rotary position embeddings vs. absolute position embeddings) affect performance on longer MWPs could offer insights into architectural considerations for improving CoLeG.\\n\\n2. The E-GSM dataset creation process, while systematic, may introduce biases that aren't adequately addressed. Using GPT-4 for extensions could potentially lead to biases in language style, problem structure, or even subtle cues that GPT-4 uses for reasoning. For example, GPT-4 might consistently use certain phrases or sentence structures that inadvertently serve as hints for other GPT models. Additionally, there's a risk of amplifying any biases present in the original GSM8K dataset. The authors should consider analyzing the distribution of problem types, linguistic patterns, and solution strategies in E-GSM compared to the original dataset to identify any systematic biases introduced during extension.\\n\\n3. The evaluation of open-source LLMs is limited to LLaMA-2 and Mistral-7B families. 
To provide a more comprehensive assessment, the authors should consider including models specifically designed for mathematical reasoning, such as MathGPT, GPT-f, or MetaMath. Additionally, evaluating performance on models with different architectural choices, like PaLM or BLOOM, could offer insights into how various model designs handle longer MWPs. This broader evaluation would strengthen the claims about the generalizability of the proposed methods.\\n\\n4. While the paper shows improvements on other MWP benchmarks, it doesn't explore how the proposed methods perform on problems significantly longer than those in E-GSM. This leaves questions about the scalability of the approaches to even more complex, multi-page word problems. The authors could consider creating a small set of extremely long MWPs (e.g., 1000+ tokens) to test the limits of their methods and provide insights into scaling challenges.\\n\\n5. The use of GPT-3.5-turbo for answer extraction in the evaluation process introduces a potential confounding factor. The paper doesn't adequately address how this might impact results, especially for non-OpenAI models. The authors should consider comparing this extraction method with simpler rule-based approaches or using model-specific output parsing to ensure fair comparison across different LLM families.\", \"questions\": \"1. How does the performance degradation on longer MWPs correlate with specific architectural features of different LLMs, such as context window size or attention mechanisms?\\n\\n2. The extension approach shows promise for open-source LLMs. Have you considered how this might be adapted for extremely long MWPs or multi-step reasoning problems that span multiple pages?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
C9YyVygCpG
Optimal Algorithm for Max-Min Fair Bandit
[ "Zilong Wang", "Zhiyao Zhang", "Shuai Li" ]
[ "We consider a multi-player multi-armed bandit problem (MP-MAB) where $N$ players compete for $K$ arms in $T$ rounds. The reward distribution is heterogeneous where each player has a different expected reward for the same arm. When multiple players select the same arm, they collide and obtain zero reward. In this paper, we aim to find the max-min fairness matching that maximizes the reward of the player who receives the lowest reward. This paper improves on the existing $O(\log T\log \log T)$ regret upper bound for achieving max-min fairness. More specifically, our decentralized fair elimination algorithm (DFE) deals with heterogeneity and collision carefully and attains a regret upper bound of $O((N^2+K)\log T / \Delta)$, where $\Delta$ is the minimum reward gap between the max-min value and sub-optimal arms. We assume $N\leq K$ to guarantee all players can select their arms without collisions. In addition, we also provide an $\Omega(\max\{N^2, K\} \log T / \Delta)$ regret lower bound for this problem. This lower bound indicates that our algorithm is optimal with respect to key parameters, which significantly improves on the performance of algorithms in previous work. Numerical experiments further verify the efficiency and improvement of our algorithms.
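To make the objective concrete, here is a brute-force sketch of the max-min matching described in the abstract. The reward matrix `mu` is made up for illustration, and this exhaustive enumeration is exponential in the number of players, unlike the paper's algorithm; it only illustrates what is being optimized.

```python
from itertools import permutations

def max_min_matching(mu):
    # mu[i][k]: expected reward of player i on arm k (N <= K assumed).
    # Enumerate all collision-free assignments (injective maps of
    # players to arms) and keep the one maximizing the minimum reward.
    N, K = len(mu), len(mu[0])
    best_val, best_match = float("-inf"), None
    for arms in permutations(range(K), N):
        val = min(mu[i][arms[i]] for i in range(N))
        if val > best_val:
            best_val, best_match = val, arms
    return best_val, best_match

mu = [[0.9, 0.2, 0.5],
      [0.3, 0.8, 0.4]]
print(max_min_matching(mu))  # -> (0.8, (0, 1))
```

Here the matching assigns player 0 to arm 0 and player 1 to arm 1; any other collision-free assignment leaves some player with expected reward at most 0.5.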
[ "multi-player multi-armed bandits", "max-min fairness" ]
Reject
https://openreview.net/pdf?id=C9YyVygCpG
https://openreview.net/forum?id=C9YyVygCpG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yDWfkNyeJO", "y6wp4eqfVY", "uBECTYBC9r", "rHDYP0JmQv", "mOqXrSWjuc", "fEPgzszAOE", "dBzMOHcLCF", "U7fGQtiWoG", "Pt7iiNgcr9", "OyftN79W4n", "ObmZyo6Kwd", "IFLjqBmzEr", "HBg9HHZD60", "DzSqqlFltl", "9jmI4owQuB", "2NUQK0coed" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1737523484491, 1732635291230, 1732115184832, 1732506760934, 1734686388049, 1730640517728, 1730689807747, 1732756598868, 1732583790850, 1732115782920, 1732114620697, 1732114825882, 1730617193184, 1733027651844, 1732645369971, 1731459890124 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2079/Reviewer_ic7z" ], [ "ICLR.cc/2025/Conference/Submission2079/Authors" ], [ "ICLR.cc/2025/Conference/Submission2079/Authors" ], [ "ICLR.cc/2025/Conference/Submission2079/Area_Chair_uMo6" ], [ "ICLR.cc/2025/Conference/Submission2079/Reviewer_fsnh" ], [ "ICLR.cc/2025/Conference/Submission2079/Reviewer_RJ7b" ], [ "ICLR.cc/2025/Conference/Submission2079/Authors" ], [ "ICLR.cc/2025/Conference/Submission2079/Reviewer_fsnh" ], [ "ICLR.cc/2025/Conference/Submission2079/Authors" ], [ "ICLR.cc/2025/Conference/Submission2079/Authors" ], [ "ICLR.cc/2025/Conference/Submission2079/Authors" ], [ "ICLR.cc/2025/Conference/Submission2079/Reviewer_ic7z" ], [ "ICLR.cc/2025/Conference/Submission2079/Authors" ], [ "ICLR.cc/2025/Conference/Submission2079/Reviewer_FPek" ], [ "ICLR.cc/2025/Conference/Submission2079/Reviewer_FPek" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Re: Author Response\", \"comment\": \"Thank you for your response. 
It effectively addresses all of my concerns, and I have adjusted my score accordingly. I recommend that the authors also incorporate the clarification regarding the inequality mentioned in line 698 into the revised paper.\"}", "{\"title\": \"Response to reviewer fsnh\", \"comment\": \"We thank the reviewer for your valuable and detailed comments. Please see our response below.\\n\\n1. Does this work demonstrate advantages over others in terms of factors such as\\u00a0$N$\\u00a0or\\u00a0$K$?\\n\\nAs for the dependence on $N$ and $K$ in previous works, we note that the work of Leshem (2023) attains $O(N^3 \\\\log T \\\\log\\\\log T)$ regret, where they assume $N=K$. And in the work of Bistritz et al. (2020), they can only get $O(\\\\exp(N, K) \\\\log T \\\\log\\\\log T)$ regret. Therefore our $O((N^2 + K) \\\\log T / \\\\Delta)$ regret not only improves the dependence on $T$, but also improves the dependence on $N$ and $K$.\\n\\nAdditionally, we highlight that the improvement over $T$ is not only reflected in removing the term $\\\\log \\\\log T$, but also in removing a very large constant before the leading term, which could be exponentially large in $1 / \\\\Delta, N, K$. This is because previous works Bistritz et al. (2020); Leshem (2023) provide an explore-then-commit (ETC) method at each epoch $s$. Specifically, they let each player explore each arm $\\\\log s$ times at the beginning of the epoch $s$, and then compute the max-min matching based on the history of observations in the exploration phase. After that, each player follows this matching in the following $2^s$ rounds. Their algorithms both only obtain an $O(\\\\log T \\\\log \\\\log T)$ regret bound since they have to set an increasing length of exploration at each epoch. This design is to make sure the probability of computing a wrong max-min matching is bounded by $\\\\exp(\\u2212s)$ when $s$ is sufficiently large that $\\\\log s > 1/\\\\Delta$, so that the regret in the exploitation phase can be bounded. 
This design also raises the problem of a large constant to guarantee $\\log s>1/\\Delta$, which requires the number of initial warm-up rounds to be $O(\\exp(1/\\Delta))$, which could be very large when $\\Delta$ is small enough. We handle this problem by applying the elimination method, which eliminates sub-optimal player-arm pairs efficiently. This assures that no forced explorations will happen in later epochs.\\n\\n2. Can the authors provide a rigorous upper bound for the communication cost in this work and discuss the possibility of making it optimal? Did the work improve the communication cost compared to previous work?\\n\\nThanks for pointing out the communication cost in our algorithm. Here we give a rigorous analysis for it. If the minimum reward gap between the max-min value $\\\\gamma^\\\\ast$ is $\\\\Delta$, then the length of each communication phase is bounded by $N \\\\log (1/\\\\Delta)$. Here $\\\\log (1 / \\\\Delta)$ is the length of transmitting a reward\\u2019s information by bit and through collisions. We only need the bit length $\\\\log (1 / \\\\Delta)$ since it is enough to distinguish two pairs with gap larger than $\\\\Delta$.\\n\\nThen the total communication cost is $N \\\\log T \\\\log (1 / \\\\Delta)$. We also note that the communication cost in Leshem (2023) is $\\\\frac{3}{2} N^3 \\\\log (1/\\\\Delta) \\\\log T$, and the communication cost in Bistritz et al. (2020) is $\\\\exp{N, K}$. Additionally, we note that $\\\\Delta$ in their works is the minimum reward gap among all player-arm pairs, whereas in our work $\\\\Delta$ is only the minimum reward gap with respect to $\\\\gamma^\\\\ast$. Thus we also significantly improve the communication cost compared with previous works.\\n\\n We believe that if the algorithm has to convey information of a reward\\u2019s estimation, then our algorithm is optimal since we make players communicate with each other by bit and through collisions, which is the most efficient way to communicate as far as we know. 
We leave it as interesting future work to design an algorithm with minimum communication cost.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewers:\\n\\nWe thank you once again for your careful reading of our paper and your constructive comments and suggestions. We would appreciate it if you could let us know whether all your concerns are addressed. We are also happy to answer any further questions in the remaining discussion period.\\n\\nBest, Authors.\"}", "{\"metareview\": \"This paper looks at a variant of the multi-player multi-armed bandit setting where the objective is to minimize some min-max regret instead of the usual cumulative one.\\n\\nThe reviewers and I are not that thrilled by the results and the algorithm, as they are not straightforward, but relatively similar to the existing ones in the literature. \\nIt is true that this paper improves a log(T)loglog(T) bound into a log(T) regret bound, but this does not necessarily say that achieving it was difficult (rather than questioning the quality of the first paper).\\n\\nAll in all, I do not think this paper reaches the ICLR bar, even though it is, as far as I can see, correct and mildly interesting.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers were lukewarm, and none of them decided to champion this paper for acceptance. As I was not thrilled either, the conclusion was clear.\"}
Additionally, they provide a regret lower bound, demonstrating that the algorithm is optimal concerning key parameters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors design a new phased elimination algorithm that improves max-min fairness by adaptively eliminating suboptimal arms and exploring the remaining ones. The algorithm achieves a regret upper bound of $\\\\mathrm{O}\\\\left(\\\\left(\\\\mathrm{N}^2+\\\\mathrm{K}\\\\right) \\\\log \\\\mathrm{T} / \\\\Delta\\\\right)$, which outperforms existing results.\\n2. The paper derives a tighter regret lower bound of $\\\\Omega(\\\\max (\\\\{N^2, K\\\\}) \\\\log T / \\\\Delta)$, which considers the parameters $N$, $K$, and $\\\\Delta$, improving upon prior work.\\n3. Numerical experiments confirm the effectiveness of the DFE algorithm across different settings.\", \"weaknesses\": \"1. I appreciate the contribution of this work, which presents the first optimal result for the MP-MAB problem, aligning with the lower bound established here. However, compared to earlier studies, particularly those by Bistritz et al. (2020) and Leshem (2023), this study, which employs a classic elimination-based strategy, only improves the regret results by a factor of $\\\\log\\\\log T$. This improvement may not be particularly significant for smaller values of $T$. Does this work demonstrate advantages over others in terms of factors such as $N$ or $K$?\\n\\n2. Recently, several studies have reported intriguing results on reducing communication costs in the MP-MAB problem for both competitive and cooperative settings. I believe the authors could strengthen this study by incorporating a communication-efficient algorithm. Can the authors provide a rigorous upper bound for the communication cost in this work and discuss the possibility of making it optimal? 
Did the work improve the communication cost compared to previous work?\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper considers the max-min fair bandit, an important variant of the multi-player multi-armed bandit problem where fairness means to maximize the reward of the player who receives the lowest reward. Existing work on max-min fair bandits suffers from large regret and heavy assumptions. The authors give a tight regret bound for the bandit problem that is optimal with respect to all parameters. A special case for the lower bound is provided. The work closes the gap for the max-min fair bandit problem.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper fills an important gap in the existing max-min bandit literature. A tight regret bound is proved and a special case is provided to demonstrate the lower bound. The improvement is significant compared to existing work. The regret bound is tight in all parameters. The algorithm is split into three phases with detailed algorithmic and graphical illustrations.\", \"weaknesses\": \"The main section of the decentralized fair elimination algorithm is a bit hard to read. It would be good if the authors could highlight the novel part of the algorithm and clearly demonstrate how the three steps of the main algorithm contribute to the regret and which dominates the regret. The elimination phase is a commonly used strategy. The exploration phase is new, but it does not seem to directly contribute to the improvement of overall regret. The communication phase is also seen in the matching bandit literature.\", \"questions\": \"\\u2022 Line 90 What is the doubling trick?\\n\\u2022 Line 242 about the second type of elimination \\u2013 unclear. How to determine the assertion of \\u201cwith high probability\\u201d? 
\\n\\u2022 If there is no elimination phase, how is the regret affected? Line 310 is confusing. \\n\\u2022 Can you give an intuitive idea / sketch proof of Thm 1, which is the key result for the whole paper, to explain how the three steps contribute to the regret bound?\\n\\n-------------------------After Rebuttal-----------------------------\\nThank you for the response. This helps me better understand the technical contribution of the paper. I see it's a theoretically interesting paper. I increased my confidence in my assessment.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewers:\\n\\nWe appreciate your insightful thoughts, and we will definitely integrate these comments and discussions into our next version. Thanks!\\n\\nBest, Authors\"}", "{\"comment\": \"Thanks for the detailed explanation. I have raised my score.\"}", "{\"title\": \"Response to reviewer ic7z\", \"comment\": \"1. Additional term in communication cost.\\n\\nWe thank the reviewer for pointing out this additional $\\\\log 1/\\\\Delta$ term in the communication cost. It is correct if we do not make the communication assumption. We will add this to the discussion about the communication phase. The total communication cost is $N \\\\log T \\\\log (1 / \\\\Delta)$, which still does not affect the leading term $O((N^2+K)\\\\log T / \\\\Delta)$.\\n\\n2. Could the authors elaborate on why the inequality in line 98 holds?\\u00a0\\n\\nI guess you are asking about the inequality in line 698, which is $2^s \\\\leq 24 \\\\log T / \\\\Delta_{i,k}^2$. First, we analyze the pair $(i,k)$ which is not eliminated at epoch $s-1$, which means it has been selected at least $2^s$ times. 
Then conditioned on the good event $\\\\neg\\\\mathcal{F}$, we have that\\n\\n $| \\\\hat{\\\\mu}\\\\_{i,k} (s) - \\\\mu\\\\_{i,k} | \\\\leq \\\\sqrt{\\\\frac{6\\\\log T}{2^s}} $.\\n\\n Moreover, since the optimal player-arm pair (i\\u2019, k\\u2019) with max-min reward $\\\\gamma^\\\\ast$ is not eliminated, it is also selected at least $2^s$ times, and we have that\\n\\n $| \\\\hat{\\\\mu}\\\\_{i\\u2019,k\\u2019}(s) - \\\\mu\\\\_{i\\u2019,k\\u2019} | \\\\leq \\\\sqrt{\\\\frac{ 6 \\\\log T}{2^s}}$.\\n\\nSince the sub-optimal pair $(i, k)$ is not eliminated, we have that $2 \\\\sqrt{\\\\frac{6\\\\log T}{2^s}} \\\\geq \\\\mu_{i\\u2019,k\\u2019} - \\\\mu_{i,k} := \\\\Delta_{i,k}$. Otherwise it must hold that $UCB_{i,k}(s) < LCB_{i\\u2019,k\\u2019}(s)$ and (i,k) will be eliminated after epoch $s-1$. Rearranging the terms yields the inequality in line 698.\\n\\nThe definition of $\\\\Delta_{i,k}$ is meaningful since we only analyze those sub-optimal pairs (i, k) with $\\\\mu_{i,k} < \\\\gamma^\\\\ast$, which guarantees that $\\\\Delta_{i,k}$ is always positive. Recall that from the definition of $\\\\gamma^\\\\ast$ we know that the minimum reward in any matching can never be greater than $\\\\gamma^\\\\ast$, thus we only care about the number of times of selecting those sub-optimal pairs $(i,k)$ with $\\\\mu_{i,k} < \\\\gamma^\\\\ast$.\"}", "{\"title\": \"Response to reviewer FPek\", \"comment\": \"We thank the reviewer for your valuable and detailed comments. Please see our response below.\\n1. Does the upper bound match the lower bound?\\n\\nIn this paper, we propose the algorithm attaining the upper bound $O((N^2+K) \\\\log T / \\\\Delta)$ and provide the analysis with lower bound $\\\\Omega(\\\\max(N^2,K) \\\\log T / \\\\Delta)$. Here we confirm that these two bounds are exactly of the same order, since we can see that $\\\\max(N^2, K) \\\\log T / \\\\Delta \\\\leq (N^2+K) \\\\log T / \\\\Delta \\\\leq 2\\\\max(N^2, K) \\\\log T / \\\\Delta $. 
This shows that the upper bound exactly matches the lower bound with respect to the terms $N, K, T, \\\\Delta$. Therefore, we indeed design an optimal algorithm for this problem.\\n\\n2. Is the $O(\\\\log T \\\\log\\\\log T)$ regret bound of Bistritz et al. (2020) analyzed against the max-min regret the same as defined in Section 2?\\n\\nWe study the exact same setting as in the work of Bistritz et al. (2020). Moreover, we do not assume each player must have different rewards over arms like Bistritz et al. (2020). Thus we analyze a more general setting, and the definition of regret is the same as in those compared works.\\n\\n3. Is there any hyperparameter for the proposed algorithm or the benchmarks the authors set in the experiments?\\n\\nIndeed our algorithm is parameter-free, thus it is easy to follow our algorithm\\u2019s description to reproduce the same performance. The hyperparameters of the benchmarks are the same as stated in their papers, and we will restate them in the updated version.\\n\\n4. Thanks for pointing out these unclear notations; we will fix them in the updated version.\"}", "{\"title\": \"Response to reviewer Rj7b\", \"comment\": \"We thank the reviewer for your valuable and detailed comments. Please see our response below.\\n\\n1. Highlight the novel part of the algorithm and clearly demonstrate how the three steps of the main algorithm contribute to the regret and which dominates the regret. Give an intuitive idea / sketch proof of Thm 1, which is the key result for the whole paper, to explain how the three steps contribute to the regret bound?\\n\\nWe highlight that the novel part of our proposed algorithm is the exploration phase with a carefully designed exploration matching set given the non-eliminated player-arm pairs. More specifically, we improve the total exploration times in one cycle (exploring all non-eliminated player-arm pairs) from a naive bound $NK$ to the optimal bound $N^2 + K$. This is also the key to matching the lower bound. 
The elimination phase guarantees that no sub-optimal player-arm pair will be selected too many times, and the optimal design of the exploration set guarantees that the difference in the number of times pairs are selected in the exploration set will not be too large. These two designs lead to the final optimal regret bound.\\n\\n2.\\u00a0Line 90 What is the doubling trick?\\n\\n\\u2018\\u2019Doubling trick\\u2019\\u2019 means that the length of the exploration phase doubles each time. By this design we can control the total communication times by $O(\\\\log T)$, and ensure that the number of additional explorations does not exceed twice the necessary number of explorations.\\n\\n3. Line 242 about the second type of elimination \\u2013 unclear. How to determine the assertion of \\u201cwith high probability\\u201d?\\n\\nThe second type of elimination means (j, k) will not occur in the optimal matching set if (j, k) does not exist in any matching set with UCB greater than $\\\\underline{\\\\gamma}_s$, and thus it will be eliminated. Here \\u201cwith high probability\\u201d means that if the UCB is greater than $\\\\underline{\\\\gamma}_s$, then with high probability the minimum reward of the given matching is smaller than the optimal max-min reward.\\n\\n4.\\u00a0If there is no elimination phase, how is the regret affected? Line 310 is confusing.\\n\\nIf there is no elimination phase, all player-arm pairs will be selected the same number of times. Denote the minimum reward gap between a pair and the max-min reward $\\\\gamma^\\\\ast$ as $\\\\Delta$, and the gap between $\\\\gamma^\\\\ast$ and a sub-optimal pair $(i, k)$ as $\\\\Delta_{i, k}$. Then the number of times of selecting $(i, k)$ is $O(\\\\log T / \\\\Delta^2)$ without the elimination phase, and the regret caused by selecting $(i, k)$ is $\\\\Delta_{i, k} O(\\\\log T / \\\\Delta^2)$, which can only be bounded by $ O(\\\\log T / \\\\Delta^2)$ since $\\\\Delta < \\\\Delta_{i, k}$. 
However, if we utilize the elimination phase, the number of times of selecting $(i, k)$ can be bounded by $O(\\\\log T / \\\\Delta_{i, k}^2)$, and thus the regret caused by selecting $(i, k)$ is $O(\\\\log T / \\\\Delta_{i, k})$. In general, by the elimination phase, we can improve the regret by a factor of $O(1 / \\\\Delta)$.\"}", "{\"summary\": \"The paper presents a theoretical study for the Multi-Player Multi-Armed Bandit (MP-MAB) problem with a max-min fairness objective. In this scenario, multiple players choose from a set of arms, and collisions between players result in zero rewards. The goal is to maximize the reward for the player with the lowest reward. The authors propose a decentralized algorithm called Decentralized Fair Elimination (DFE), which improves the existing regret upper bound from $O(\\\\log T \\\\log \\\\log T)$ to $O({\\\\log T}/ {\\\\Delta})$. Additionally, a matching lower bound is provided, demonstrating the optimality of the proposed algorithm. The effectiveness of the algorithm is also verified through numerical experiments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The DFE algorithm addresses fairness without requiring a central coordinator, making it scalable to larger systems and addressing practical concerns in decentralized settings like wireless networks.\\n\\n2. The algorithm proposes a novel exploration assignment strategy that ensures even exploration across player-arm pairs, which leads to efficient elimination of sub-optimal arms and reduces overall regret.\\n\\n3. The authors provide both a theoretical analysis (including regret upper and lower bounds) and empirical validation through simulations.\", \"weaknesses\": \"1. 
In remark 1, the authors claim that \\\"we could still use collisions to transmit information bit by bit, resulting in an additional constant length of information bits without the communication assumption.\\\" However, when using collisions to transmit information, players should quantize the UCB/LCB estimates to avoid a potentially infinite communication length, as these numbers are often decimal numbers. To ensure that the elimination phase still works despite the quantization errors, the required communication length would need to be on the order of $O(\\\\log 1/\\\\Delta)$. Given that the number of epochs is $O(\\\\log T)$, this results in an additional term of $O(\\\\log T \\\\log 1/\\\\Delta)$ in the regret bound.\\n\\n2. Could the authors elaborate on why the inequality in line 698 holds? The paper does not provide any explanation for this, and additional details would be greatly appreciated. Additionally, I am unsure if the definition of $\\\\Delta_{i,k} = \\\\gamma^* - \\\\mu_{i,k}$ is meaningful, as this value can be non-positive.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work studies a multi-player multi-armed bandit problem with heterogeneous rewards and collisions.\\nThis paper aims to find a fair bandit algorithm that matches each player to a distinct arm while maximizing the reward of the player who receives the smallest reward. 
\\nThis paper provides a max-min regret lower bound of $\\\\Omega(\\\\max(N^2, K) \\\\log T/\\\\Delta)$. \\nThe authors propose the decentralized fair elimination (DFE) algorithm that guarantees the exploration of all valid player-arm pairs by constructing matchings, controls the communication times by the doubling trick, and eliminates player-arm pairs whose upper confidence bounds are smaller than the lower confidence bound of the current max-min value.\\nThe authors show that DFE achieves $O((N^2+K)\\\\log T/\\\\Delta)$ max-min regret.\\nThere is also an empirical study of the performance of the proposed algorithm compared to prior decentralized competitive MP-MAB algorithms.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The problem studied (fairness in MP-MAB) is important and interesting.\", \"The algorithmic designs of exploration and elimination procedures are interesting.\", \"This paper conducts both theoretical and empirical studies.\", \"I appreciate the diagrams for algorithmic design illustration.\"], \"weaknesses\": [\"The claim that the proposed algorithm is optimal concerns me because the upper bound $O((N^2+K)\\\\log T/\\\\Delta)$ does not match exactly with the lower bound $\\\\Omega(\\\\max(N^2, K) \\\\log T/\\\\Delta)$ in terms of $N$ and $K$. The proposed algorithm is definitely near-optimal, but I am concerned about claiming it as optimal.\", \"The readability of this paper can be further improved. Notations can be introduced more clearly. For example:\", \"A matching $m$ is first introduced as \\\"matching set\\\" $m(t)$, but later mostly used as matching $m$ or $m_i$\", \"The definition of regret in this paper is very different from that in the multi-armed bandit literature. I would suggest the authors make this point clearer in the paper.\", \"The last sentence on the first page is too long.\", \"$\\\\mathcal{P}$ always denotes the player-arm set in the rest of the paper. 
However, it is used differently in Section 4.\", \"Claim 1 only holds for the instance described in Section 4. This point should be made clear, as in Lemma 1.\", \"Figure 5 is not very color-blind friendly or black-white printable.\"], \"questions\": [\"Is the $O(\\\\log T \\\\log \\\\log T)$ regret bound of Bistritz et al. (2020) analyzed against the max-min regret the same as defined in Section 2? If not, it seems unfair to compare with it in the introduction.\", \"Is there any hyperparameter for the proposed algorithm or the benchmarks the authors set in the experiments? If so, please add them to the paper to increase replicability.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
C9DazhfVZR
Generation Network for Echocardiographic Sectional Positioning and Shape Completion
[ "Wenli Dai", "Hao Xu", "Shijie Wang", "xiaorong Chen", "Dexing Kong" ]
The precise localization of 2D echocardiography planes in relation to a dynamic heart necessitates specialized expertise, as existing automated algorithms primarily classify standard views while lacking the capability for comprehensive 3D structural perception. Traditional measurement techniques have evolved to infer 3D heart geometry, yet recent advancements in artificial intelligence, though demonstrating spatial awareness, still fall short in providing explicit 3D modeling. CTA-based digital twins, while promising, are hindered by cost and radiation concerns. Echocardiography, being cost-effective and radiation-free, remains limited in its ability to provide 3D perception. To address this gap, we introduce a novel point cloud-based weakly supervised 3D generation network specifically tailored for echocardiograms. This network automates 3D heart inference, biomarker modeling, and slice tracking based on 2D echocardiography. To further enhance accuracy, we integrated a self-supervised learning branch into our framework, introducing a multi-structure reconstruction loss and an overall reconstruction loss specifically designed for cardiac structure completion. Additionally, we constructed a comparative branch that serves to bolster the network's precision in inferring cardiac structures, thereby refining our approach and elevating the fidelity of the generated 3D models. Our approach enables real-time, robust 3D heart modeling, independent of paired data requirements, thereby facilitating research advancements in echocardiographic digital twins.
[ "Echocardiography; 3D Cardiac Modeling; Weakly Supervised Learning; Point Cloud Generation; AI-assisted Echocardiographic Analysis" ]
https://openreview.net/pdf?id=C9DazhfVZR
https://openreview.net/forum?id=C9DazhfVZR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x4tRufpJkK", "m6K5HQMwnW", "fxIDWlOR1Z", "Y1QzXWRhVR", "PtKJCaopkD", "2yhzbFlMIp" ], "note_type": [ "official_review", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730630305605, 1733132202050, 1733150230493, 1730077051158, 1730714167271, 1730118004850 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10557/Reviewer_WVb4" ], [ "ICLR.cc/2025/Conference/Submission10557/Reviewer_gCnM" ], [ "ICLR.cc/2025/Conference/Submission10557/Authors" ], [ "ICLR.cc/2025/Conference/Submission10557/Reviewer_t8bS" ], [ "ICLR.cc/2025/Conference/Submission10557/Reviewer_2hLP" ], [ "ICLR.cc/2025/Conference/Submission10557/Reviewer_gCnM" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a weakly-supervised 3D generation network for echocardiographic sectional positioning and shape completion. The paper aims to address the limitations of traditional methods in providing explicit 3D modeling of heart structures from 2D echocardiography images. The proposed network uses point clouds to infer cardiac structures and can perform real-time inference without requiring significant paired training data. The authors also integrate a self-supervised learning branch into their framework, which enables multi-structure reconstruction loss and overall reconstruction loss for cardiac structure completion. Experimental results demonstrate superior performance on the test set, showcasing the potential of this approach in facilitating the reconstruction of heart digital twins form echocardiography.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses a significant problem in the field of echocardiography, namely the lack of efficient and accurate methods for inferring 3D heart structures from 2D ultrasound images. 
Current methods rely on manual segmentation or registration-based approaches that are time-consuming, labor-intensive, and often require extensive expertise.\", \"The proposed approach requires only echocardiography as the input modality, which is a widely used, low-cost, and non-radiative technique in clinical practice. This is a significant advantage because it eliminates the need for additional imaging modalities, making it more practical and accessible to a wider range of healthcare settings.\", \"The proposed weakly-supervised single-view 3D generation network and processing pipeline based on point clouds for echocardiography can address the limitations of traditional methods.\", \"The proposed network leverages the spatial perception capabilities of neural networks to infer 3D structures and directly obtain the relative 3D pose between 3D heart structures and 2D ultrasound slices.\", \"The approach enables real-time inference, making it suitable for applications that require rapid responses.\"], \"weaknesses\": [\"The paper has poorly written methods and results sections, with incomplete and ungrammatically structured sentences (e.g., 131-135). 
This makes it difficult for readers to understand the methodology and results.\", \"The presentation of the network's components is not logical or well-articulated, making it challenging to follow the authors' reasoning and design choices.\", \"The supplemental material does not provide a detailed description of the data processing pipeline as claimed, which raises concerns about reproducibility and the preparation of supervised training data.\", \"It is unclear how the network takes in echocardiograph images to estimate 3D heart shape, especially given that it requires contours of 2D echo planes (Figure 2).\", \"The paper claims early on that no paired data is needed, but it is not clear how this is enabled in the proposed method given the use of paired supervised data.\", \"The methods description does not clearly explain what \\\"weak supervision\\\" means or how it is applied, nor does it provide a clear understanding of the \\\"contrastive\\\" aspect of the formulation or learning process.\", \"There is no comparison with other relevant methods in the field, making it difficult to evaluate the proposed approach's performance and limitations.\", \"No statistical significance/equivalence tests are performed to support claims about reconstruction performance differences among different slices (Line 361).\", \"The authors rely heavily on Figure 2 to describe the method, but it is unclear how and why each subnetwork is structured in a particular way.\", \"Given the lack of clarity on data processing pipeline and network architecture, it would be challenging for readers to reproduce the results presented in the paper.\"], \"questions\": [\"See weaknesses.\", \"Please provide a more detailed description of the data processing pipeline used to prepare your supervised training data.\", \"Can you provide a comprehensive statistical analysis of your results, including p-values, confidence intervals, and effect sizes?\", \"How do you define \\\"weak supervision\\\" in 
this context, and how does it differ from traditional supervised learning approaches? Please provide specific examples or illustrations to clarify this concept.\", \"Can you elaborate on the contrastive learning aspect of your method? Specifically, what is being contrasted (e.g., positive vs negative pairs), and how do you define these pairings? Additionally, how sensitive is the training process to batch size settings?\", \"How does the network estimate the 3D heart model given an echocardiographic image rather than contours? Please provide a detailed explanation of the architectural design and any key components or features that enable this capability.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Respond to Rebuttal\", \"comment\": \"Response 2: The issue is that echocardiography often has relatively large distortion and poor image quality, as the author said that 'CTA and ultrasound models have differing boundary definitions, leading to size over/underestimations'. Directly using the segmentation mask is easily affected by the domain gap in real scenarios (the author should address this problem, as I mentioned in weakness 2). More experiments are needed to verify the generalizability of this method.\\n\\nResponse 3 & 6: Is Figure 10 the experiment? The author should first point out where the extra experiments were added in this paper. 
Also, Figure 10 only shows that the normalized area of CTA and echocardiography have a similar change in cardiac structures during the heartbeat cycle, which cannot illustrate that the predicted 3D cardiac structures from echocardiography align with the real CTA (the chamber size, shape and morphology).\\n\\nResponse 1 & 5: No comments\", \"response_4\": \"No comments\\n\\nI would keep my rating because I think my questions have not been addressed yet, and the author should point out where they add experiments or modifications in the paper.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We appreciate the many valuable opinions and withdraw the manuscript with gratitude.\"}", "{\"summary\": [\"This paper first analyzes the current state of 3D heart modelling and then proposes a method for 3D heart reconstruction based on a single 2D echocardiography plane. This paper sets up its learning task by the following steps:\", \"Introducing a data-preprocessing pipeline, by converting the existing 3D CTA voxel data into 3D point-cloud data and simulating the echocardiographic planes\", \"Decoupling the reconstruction task into the coarse-shape, component and view reconstruction in a PCN for a better learning objective.\", \"Introducing contrastive losses between the coarse-shape and component reconstruction branches.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This work studies an interesting problem that might have huge potential clinical impact.\\n\\nThe authors deliver the motivation of the paper well. \\n\\nThe authors demonstrate a novel approach to solving the reconstruction problem.\", \"weaknesses\": \"This paper lacks a summary of the recent relevant research. 
For example, the authors should include some paragraphs to summarize the point-cloud reconstruction works other than PCN.\\n\\nThe methodology and experiment parts are generally not well-written. For example, in Figure 1, what does XXX mean? Figure 2 lacks a clear explanation of each component. In the multi-branch network structure part (2.3.1), the paragraph on the multi-structure branches should be introduced first. \\n\\nThe training data-preprocessing pipeline relies heavily on the segmentation network. However, the author does not explain or discuss how the proposed method compares with other data-preprocessing methods. \\n\\nThe authors claim that the network is light-weight, but in the experiment part, it is hard to see how this statement holds, since there is no analysis or comparison to support it. \\n\\nThe work lacks comparison with other existing 2D-to-3D point-cloud reconstruction methods.\", \"questions\": \"Adding more background information and literature summarization on 2D-to-3D point cloud reconstruction to the paper.\\n\\nAdding comparison with modern network architectures, such as Transformer-based methods, or adding analysis to demonstrate why the method is lightweight. \\n\\nAdding comparison with other data-preprocessing methods, for example, adding more views, since there is no upper bound in this work (a fully supervised method). \\n\\nIn section 2.3.2, why is there a contrastive loss term for input $x$ and $x_{gt}$, and what is the difference between notation $X$ and $x$?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a 2D-to-3D inference approach to generate 3D heart geometry from 2D echocardiography. It uses multi-structure reconstruction loss, an overall reconstruction loss, and contrastive loss to enhance precision.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. 
The problem has clinical significance.\\n2. Global heart structure and different views and parts are considered in the model design and evaluated separately. \\n3. The experiment includes quantitative and qualitative evaluation and ablation studies.\", \"weaknesses\": \"1. The method is not very clearly illustrated. Figure 2 is essential, but there is no caption for it. How are these decoders designed? What is the difference between the view completion and view reconstruction decoders? It seems that they can be merged into one decoder.\\n2. The paper proposes to solve the shape completion problem. But shape completion is a large collection of problems and needs to be better defined in this paper. Shape completion seems to mean inferring 3D geometry from 2D planes, but what if some 2D views are missing? Does this model still work?\\n3. The figures need to be improved. For example, in Figure 5, curve plots (for showing the trends) are improper for comparing different heart components here. Bar plots would be better. \\n4. The evaluation metrics should have been better designed and illustrated. Are the geometric distances, e.g., the chamfer distances, calculated on normalized point clouds? However, the actual distances in mm are also desired. And how big is the heart, e.g., in mm? And how are the distances compared to the actual size of the heart? How applicable is this approach to real scenarios?\\n5. For localization, why is the threshold 2 mm?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a framework for predicting 3D cardiac structural perception from 2D echocardiography planes. This is a new task in the echocardiographic domain that aims to explore the 3D cardiac structures instead of using CTA. 
All experiments are conducted in their in-house dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper proposes a new task in echocardiography, which has enabled 3D cardiac structural prediction via using 2D echocardiographic images.\\n\\n2. This paper demonstrates the good performance in predicting cardiac structures.\", \"weaknesses\": \"1. First, this paper can benefit from releasing the dataset since all the designs serve this dataset, which has a large amount of CTA data corresponding to cardiac structures. Such a dataset can help train a robust network for accurate cardiac structural prediction.\\n\\n2. A drawback of this task is using only the segmentation (cardiac structure contours) predicted by the 3D CNN network from CTA images for 3D shape prediction. We are not able to ensure the echocardiography scanned in the real scenarios can also make the closed shape compared with the prediction from CTA. For example, the CTA scans always have a fixed position for cardiac, and the imaging quality is much better than the echocardiography. In contrast, due to the different imaging principles, echocardiography often has relatively large distortion with poor image quality, and image acquisition highly depends on the sonographer's experience.\\n\\n3. With weakness 2, I consider that the experiment can be improved by using some image pair. The author can collect both CTA and echocardiography data from the same patient/person. Then, the experiment can utilize real echocardiography to predict the 3D cardiac structures. With this experiment, the author can also demonstrate that the proposed approach has overcome the following points: \\ni). What is the domain gap between CTA and echocardiography when applying this method? ii). Can this method actually be applied in echocardiography? iii). With the pair of CTA and echocardiography, the result could be more convincing.\\n\\n4. 
The network is designed only for this task; I don\\u2019t think this network has much innovation because all modules and designs are integrated with other methods. For example, global and local features, coarse to fine, some augmentations, etc.\", \"questions\": \"1. Will the author release the dataset? I think this work is 80% dependent on their dataset and their task.\\n\\n2. Can an additional experiment be added to the rebuttal? It is really important to validate that these newly proposed tasks can benefit real medical applications and inspire follow-up works.\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"If the author releases the dataset, then an ethics review is required. However, this paper does not demonstrate whether this dataset will be publicly available or not.\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
C9BA0T3xhq
Optimizing Q-Learning Using Expectile Regression: A Dual Approach to Handle In-Sample and Out-of-Sample Data
[ "Caroline Chen", "Yuwei Fu" ]
Offline Reinforcement Learning (RL) presents unique challenges, primarily due to the constraint of learning from a static dataset without additional environmental interaction. Traditional methods often face limitations in effectively exploiting the available data, particularly when navigating the exploration-exploitation trade-off inherent in RL. This paper introduces a novel algorithm inspired by Implicit Q-Learning, designed to extend the utility of the Bellman update to actions not explicitly present in the dataset. Our approach, termed Extended Implicit Q-Learning (EIQL), strategically incorporates actions beyond the dataset constraints by allowing selection of actions with maximum Q. By doing so, it leverages the maximization capability of the Bellman update, while simultaneously mitigating error extrapolation risks. We demonstrate the efficacy of EIQL through a series of experiments that show its improved performance over traditional offline RL algorithms, particularly in environments characterized by sparse rewards or those containing suboptimal and incomplete trajectories. Our results suggest that EIQL enhances the potential of offline RL by utilizing a broader action spectrum.
[ "reinforcement learning" ]
Reject
https://openreview.net/pdf?id=C9BA0T3xhq
https://openreview.net/forum?id=C9BA0T3xhq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "hdHIhJjYFR", "S1bCAHHfh7", "Oqz2BS2ctE", "IAfspWsW6P", "BRUJqldlQB", "AQ9XmKukS6" ], "note_type": [ "official_review", "official_review", "meta_review", "official_review", "decision", "official_review" ], "note_created": [ 1729737454276, 1730667831405, 1734628458846, 1730688487412, 1737524031557, 1730081507588 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10189/Reviewer_7kYU" ], [ "ICLR.cc/2025/Conference/Submission10189/Reviewer_X8fM" ], [ "ICLR.cc/2025/Conference/Submission10189/Area_Chair_EnPQ" ], [ "ICLR.cc/2025/Conference/Submission10189/Reviewer_JGTd" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10189/Reviewer_pBMH" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes an extension of the Implicit Q-Learning (IQL) algorithm by selectively using the maximization capability of Bellman updates controlled using a Bernoulli parameter. Empirical gains are demonstrated with respect to IQL on D4RL offline RL benchmark.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well structured. The proposed idea is simple and easy to implement.\\n2. Empirical results are provided on various robotics benchmarks, showcasing the effectiveness of the approach.\", \"weaknesses\": \"1. The paper could benefit from a small discussion on model-based approaches for offline RL, such as MOReL[1] and MOPO[2].\\nPenalizing OOD (Out-of-Distribution) actions is not the only way to handle action extrapolation in offline RL.\\n\\n2. The related work section is missing key references, particularly the omission of Sparse Q-learning (SQL) and Exponential Q-learning (EQL) [3], which is critical. These methods have outperformed IQL on suboptimal trajectories.\\n\\n3. The paper does not explain Implicit Q-Learning (IQL) in the preliminaries, making it difficult to follow Equation 1. 
It is important to describe the notations used with the equation to improve readability.\", \"minor\": \"Typo in quotes line 321\\n\\nNo hyperparameters have been provided in the appendix, making it challenging to reproduce the work\\n\\n[1] Kidambi, Rahul, et al. \\\"Morel: Model-based offline reinforcement learning.\\\" Advances in neural information processing systems 33 (2020): 21810-21823.\\n\\n[2] Yu, Tianhe, et al. \\\"Mopo: Model-based offline policy optimization.\\\" Advances in Neural Information Processing Systems 33 (2020): 14129-14142.\\n\\n[3] Xu, Haoran, et al. \\\"Offline rl with no ood actions: In-sample learning via implicit value regularization.\\\" arXiv preprint arXiv:2303.15810 (2023).\", \"questions\": \"Q1. Why is the Bernoulli parameter required during policy extraction in Eq 3? How is a selected in the second part of the equation where B' is used?\\n\\nQ2. I am confused about Theorem 4.1. The notations seem inconsistent. In Eq 1, B is used; in the theorem, p is used. The proof states \\\"This proof highlights the influence of the parameter $\\\\beta$ in controlling the extent to which the new policy deviates from optimality\\\"; however, $\\\\beta$, as per my understanding, denotes the behavior policy and is not a parameter?\nIf the authors could provide a clear explanation of each step and a notation table, the contributions would be better understood.\\n\\nQ3. What was the choice of B for each environment during empirical evaluation? 
Why does EIQL have lower performance than some baselines in Table 1?\\n\\nKindly also refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the authors propose to modify the value loss in Implicit Q-Learning to stochastically tradeoff between using in-sample data to estimate the loss and using the learned policy to sample potentially unseen actions.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The problem being worked on is well-motivated and important. There are a lot of experiments with potentially significant results.\", \"weaknesses\": [\"I had a hard time understanding this paper. The writing is unclear in many parts and there are inconsistencies in the notation. I detail some of them below, but I think the amount of rewriting required to get this paper to an acceptable level is beyond what can be done in the rebuttal period.\", \"I understand that this work builds on prior works, however, every paper should stand on its own in some way. In this work, the authors build on Implicit Q-Learning, but there have been no introductions of IQL even just to set up the notation, the background and give some context. It is not necessary to give an in-depth discussion but if it is a central work that is being built upon, a short summary or setup would clarify what is being added.\", \"For example, $L_2^\\\\tau$ is not defined anywhere in the current work. Similarly, L164 mentions that the value function in (Kostrikov et al., 2021) is about to be defined but there is no context as to why, what it means, or how it relates to the current work. 
The few words in L171-173 only add to the confusion as it seems to assume that the reader is coming in with the discussion of (Kostrikov et al, 2021) fresh in their minds.\", \"The notation changes from L163 to L167-170 for the $\\\\tau$-th expectile of a random variable $X$.\", \"Some examples should be cited in L097 for the offline RL algorithms that are alluded to.\", \"It\\u2019s unclear what Section 4.3 is trying to show until the end of the section, this should be stated clearly at the beginning.\", \"3 seeds for the experimental results is too small. Even just comparing to IQL, there should be at least 10 per experiment.\", \"Figures 1 and 2 looks like each experiment was only done with one seed, which makes it hard to draw conclusions with confidence. I also don\\u2019t see how the conclusion in L358-359 was derived from Figure 2. All the methods look pretty much the same.\"], \"minor\": [\"The citations should be in parenthesis when they are not used as the subject of the sentence (e.g. 
in 042, it should be (Levin et al, 2020), similarly in L045, L048, etc.\", \"Eq (2), $V_\\\\psi$ has not rendered correctly\", \"The fonts and rendering of Figures 1 and 2 are too small to be readable on paper and become blurry when zoomed in\", \"These figures are labelled as \\u201canalysis\\u201d but I would call them experimental results.\"], \"questions\": [\"Is $\\\\tau\\u2019$ related to $\\\\tau$ or are they meant to be independent parameters?\", \"In L080, $p$ was used to denote the transition dynamics, is that still the case for the $p$ in L108-L114?\", \"What is $\\\\beta$ in Theorem 4.1?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Optimizing Q-Learning Using Expectile Regression: A Dual Approach to Handle In-Sample and Out-of-Sample Data\", \"summary\": \"This paper proposes an algorithm for offline RL, by suggesting an extension to an existing work (Implicit Q-Learning (Kostrikov et al. 2021)). The paper presents simulation experiment results performed on various mujoco tasks.\", \"comment\": \"This paper received four expert reviews, with scores 1, 1, 3, 3, and the average score is 2. All reviewers pointed out many issues, starting from the poor quality of presentation to the lack of algorithmic novelty and poor experiments. Based on the reviewers, this paper is currently below the acceptance quality for a top ML conference.\", \"additional_comments_on_reviewer_discussion\": \"The authors have not provided a response.\"}", "{\"summary\": \"This paper presents Extended Implicit Q-Learning (EIQL), a new approach to offline reinforcement learning that aims to extend Implicit Q-Learning by selecting actions beyond the dataset constraints to leverage the Bellman update's maximization capability. 
The proposed method intends to improve performance in environments with sparse rewards or incomplete trajectories by occasionally incorporating actions not seen in the dataset. Experimental evaluations are conducted on standard offline RL benchmarks to illustrate EIQL\\u2019s potential benefits over traditional algorithms.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"N/A\", \"weaknesses\": \"This paper exhibits a lack of rigor and completeness, with several critical issues that impact both readability and credibility. The related work section is notably underdeveloped, which severely hampers an understanding of how this work fits within the existing body of research and fails to establish a clear contribution. Additionally, the paper's notation is inconsistent and ambiguous, with numerous undefined terms and unclear derivations, leading to confusion in understanding the methodology. Key elements such as Theorem 4.1 are poorly presented, lacking in both formal results and coherence. Section 4.3 appears disconnected from the rest of the paper, as there is minimal context or explanation for its inclusion. Moreover, the paper is marred by formatting errors, including unformatted algorithm environments and incorrect citation formats, which contribute to an unprofessional presentation. 
In its current form, the paper does not meet the standards expected for an ICLR submission and would benefit greatly from a substantial revision before being reconsidered for publication.\", \"questions\": \".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes two approaches to extend Implicit Q-Learning: a) it modifies the value loss by incorporating a target of sampled action value function, and the policy objective by sampled advantage weighted regression.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"No\", \"weaknesses\": \"1. The theoretical analysis is misleading as a loss function cannot be regarded as a random variable.\\n2. The experiments exclude the SOTA offline RL algorithms\", \"questions\": \"1. Where is the definition of $L^\\\\tau_2$?\\n2. Where does the action in the second part of Equation 3 come from?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
C8niXBHjfO
Does Training with Synthetic Data Truly Protect Privacy?
[ "Yunpeng Zhao", "Jie Zhang" ]
As synthetic data becomes increasingly popular in machine learning tasks, numerous methods---without formal differential privacy guarantees---use synthetic data for training. These methods often claim, either explicitly or implicitly, to protect the privacy of the original training data. In this work, we explore four different training paradigms: coreset selection, dataset distillation, data-free knowledge distillation, and synthetic data generated from diffusion models. While all these methods utilize synthetic data for training, they lead to vastly different conclusions regarding privacy preservation. We caution that empirical approaches to preserving data privacy require careful and rigorous evaluation; otherwise, they risk providing a false sense of privacy.
[ "ML privacy", "membership inference" ]
Accept (Poster)
https://openreview.net/pdf?id=C8niXBHjfO
https://openreview.net/forum?id=C8niXBHjfO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xWKGlSrC3B", "rj71kvRyD6", "l5xrct26Ht", "kUGl5neFdZ", "jrgPPJLDQ4", "guOh64pW5C", "W2u0s0zs1g", "TY9OfcesFy", "QA0Q9iyb8b", "Q80Zycg8KC", "PA5atuCfU8", "K5ILLo8KYi", "G2UwvAlPjT", "EUUhd1qJfB", "DVcKsnzEhy", "D3oGf8y9GT", "AmcD33qJIA", "8BR3SJUAY2", "1X2GCjirDE", "0GPXqCHDB3" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732601169704, 1731950708444, 1730717451548, 1731944921476, 1731944587289, 1732526175922, 1732507973295, 1730045526606, 1731946671244, 1729884711172, 1732530717042, 1737523412228, 1732525384435, 1732546109915, 1734681094459, 1731944184530, 1732556427331, 1733145526271, 1730605574836, 1731950635146 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission722/Reviewer_YLFf" ], [ "ICLR.cc/2025/Conference/Submission722/Authors" ], [ "ICLR.cc/2025/Conference/Submission722/Reviewer_qB2Z" ], [ "ICLR.cc/2025/Conference/Submission722/Authors" ], [ "ICLR.cc/2025/Conference/Submission722/Authors" ], [ "ICLR.cc/2025/Conference/Submission722/Authors" ], [ "ICLR.cc/2025/Conference/Submission722/Reviewer_YLFf" ], [ "ICLR.cc/2025/Conference/Submission722/Reviewer_ybiF" ], [ "ICLR.cc/2025/Conference/Submission722/Authors" ], [ "ICLR.cc/2025/Conference/Submission722/Reviewer_YLFf" ], [ "ICLR.cc/2025/Conference/Submission722/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission722/Reviewer_qB2Z" ], [ "ICLR.cc/2025/Conference/Submission722/Reviewer_ybiF" ], [ "ICLR.cc/2025/Conference/Submission722/Area_Chair_yDs2" ], [ "ICLR.cc/2025/Conference/Submission722/Authors" ], [ "ICLR.cc/2025/Conference/Submission722/Reviewer_cuks" ], [ 
"ICLR.cc/2025/Conference/Submission722/Reviewer_qB2Z" ], [ "ICLR.cc/2025/Conference/Submission722/Reviewer_cuks" ], [ "ICLR.cc/2025/Conference/Submission722/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your reply, that has addressed most of my concerns. I have raised my score to 6.\"}", "{\"title\": \"Reply to Reviewer YLFf [2 / 2]\", \"comment\": \"> Q4: The dataset distillation method (random/OOD) shows utility drop compared to DP-SGD. Would increasing the number of synthetic data help improve utility?\\n\\nYes, increasing the number of synthetic data points could potentially improve utility, but this goes against the original motivation of DD. The goal of DD is to achieve similar performance to the full dataset while using significantly fewer data points (e.g., ipc=10 or 100).\\n\\nMoreover, when ipc = 1000, each experiment consumes approximately 563 GPU hours (nearly 100 times the runtime of DPSGD). Since increasing IPC further does not enhance privacy, it may not be worthwhile.\\n\\nAdditionally, related works rarely use OOD data for dataset distillation. We encourage researchers to explore this direction!\\n\\n\\n> Q5: More details about how the MIA metric is calculated.\\n\\nThank you for pointing this out. \\n\\nAs we have mentioned in the experimental setup, we randomly designate 500 data points as \\u201caudit samples\\u201d on which we evaluate membership inference, and we use mislabeled data as strong canaries to simulate worst-case data; the remaining 49,500 samples are always included in every model's training data. \\n\\nFor each method, we train 32 shadow models, ensuring that each audit sample is included in the training data of 16 models. Here, we used leave-one-out cross-validation for all 32 models\\u2014each time using one model as the victim and the remaining 31 models as the attacker's shadow models. 
Therefore, we can calculate the attack\\u2019s TPR and FPR over the 32\\u00d7500 guesses of the attacker on all canaries and victim models. We will update this to the experimental setup part in the revised manuscript.\\n\\n> Q6: Could provide an adaptive attack that achieves higher attacher performance based on Figure 7 findings where the LiRA achieves a lower MIA metric.\\n\\nWe really appreciate your suggestion on designing an adaptive attack. Actually, we put a lot of effort into this, but we find that it is really challenging to represent the \\u201cweird\\u201d DD transformation with a simple function. Unlike other common augmentations\\u2014such as cropping, rotation, or Gaussian blur, which are relatively straightforward to model\\u2014DD transformation is much harder to represent, making adaptive attacks more complex to implement.\"}", "{\"summary\": \"The paper studies the privacy-preserving properties of commonly-used synthetic image generation methods. The measuring of privacy-preservation is carried out using membership inference attacks using the attacks given by [Aerni et al. (2024)](https://arxiv.org/pdf/2404.17399). The paper compares the privacy-preservation of four commonly-used techniques: coreset selection, dataset distillation, data-free knowledge, model distillation, and synthetic data generated from diffusion models. 
The experiments are carried out on CIFAR-10 and the most vulnerable samples are mislabeled samples.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is very well written and the experiments are well-explained and seem solid.\", \"The paper can serve as a good reference for showing the strength of DP-SGD for obtaining a good privacy-utility trade-off for synthetic image data.\"], \"weaknesses\": \"- The paper does not truly present many novel ideas, though it is another valuable demonstration about the effectiveness of DP-SGD for obtaining a good privacy-utility tradeoff for ML models when the privacy protection is measured via the most vulnerable samples.\\n\\nRegarding the idea of auditing with worst-case samples, for example: it has been studied extensively and is already considered by Carlini et al., 2019, \\\"The secret sharer: Evaluating and testing unintended memorization in neural networks\\\" and Jagielski et al., 2020, \\\"Auditing differentially private machine learning: How private is private sgd?\\\" So, although auditing with worst-case samples seems to be a central theme in the paper, there is not much novelty there (and I also think these references should be included).\\n\\nAlso, the vulnerability of synthetic data against membership inference attacks has been considered in the literature, and there also seem to be some central references missing: see e.g. the work by Hayes et al., 2019, \\\"LOGAN: Membership Inference Attacks Against Generative Models.\\\"\\n\\n- The experimental comparison is a bit restricted, after all, since only the CIFAR-10 dataset is considered. Perhaps one could consider datasets from other domains as well?\\n\\nDespite its strengths, I am leaning towards reject as I think the paper does not provide sufficiently novel results to reach the bar for this venue. 
Nevertheless, I think that with some rewriting this will be a nice paper and can serve as a reference for this topic (privacy-protection of synthetic image data).\", \"questions\": [\"Do you think similar comparisons could be easily carried out on different image datasets or in other domains (e.g., tabular data)?\", \"What is the setting behind Figure 4? There are no details given on that experiment.\", \"When focussing on practical scenarios: do you think the situation would differ, if instead of using synthetic samples (e.g., mislabeled samples) for the auditing, you would try to find the most vulnerable data samples in the dataset?\", \"Before the experimental setup is presented on page 6, there are some experimental results presented on page 5 (Figures 3 and 4) for which no sufficient details are given. I cannot be sure for which dataset the result of Figure 4 is, and what exactly the setting behind that figure is.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer cuks\", \"comment\": \"Dear Reviewer cuks:\\n\\nThank you for your insightful comments on our paper. We sincerely appreciate your recognition of how our work advances prior analyses of empirical defenses using synthetic training data. We now address your concerns below:\\n\\n--- \\n\\n\\n> Q1: Comparison with DP-SGD baselines could be misleading as they do not satisfy differential privacy. Could add a baseline that satisfies DP and a non-private baseline.\\n\\nThank you for the valuable suggestions, and we apologize for anything that is unclear. \\n\\nIt is true that we give up meaningful provable privacy guarantees and view DP-SGD as a purely heuristic defense. This is exactly our point because we need to tune the hyper-parameters of these DP-SGD baselines for higher utilities to ensure fair comparisons with other empirical defenses. 
Results prove that none of the empirical defenses based on synthetic training data outperforms heuristic DP-SGD baselines in the context of privacy-utility-efficiency tradeoff under similar utility levels. We will improve the writing to clarify and avoid being misleading. \\n\\nFor the non-private baseline, it is reported in Figure 1 and Table 4 named \\u201cUndefended\\u201d. It represents a ResNet trained with a standard training routine using SGD. [email protected]%FPR of the non-private baseline is 100% and the test accuracy is 94.78%. More details can be found in Appendix A.1. We will also improve the writing to introduce it in the main text.\\n\\n> Q2: The paper would benefit from a discussion of how the formal guarantee of differential privacy differs from heuristic privacy - particularly, with respect to DP applying to adding or removing any possible record within the data domain; whereas, the MI attacks in this paper are only considered with respect to the training data.\\n\\nThank you for your valuable suggestions on this! We will incorporate more detailed discussions based on your input.\", \"one_important_point_to_clarify\": \"we aim to emphasize that DPSGD can serve as a very strong heuristic defense. Unlike canonical DPSGD, where random noise is added to make the privacy budget meaningful (e.g., setting $\\\\epsilon = 1$), our focus is on the fact that even with a privacy budget much larger than 1000, DPSGD remains one of the most effective defenses among the empirical methods we've examined.\\n\\n\\n> Q3: Why are the gridlines in Figure 1 not uniform? Consider changing this or explaining in figure caption.\\n\\nThank you for pointing out this! We will address this by updating Figure 1 in the revised manuscript to make gridlines uniform.\\n\\n> Q4: As with [0], are the DP-SGD models reported in Table 4 only non-private due to hyperparameter tuning? If so, what privacy parameters were these models trained with?\\n\\nThanks for the question. 
We indeed tune the hyper-parameters to achieve higher utility (for a fair comparison with other defenses) and forgo provable privacy to construct heuristic DP-SGD defenses. The privacy budgets are $\\\\epsilon \\\\approx 1.8\\\\times 10^8$ for high-utility DP-SGD and $\\\\epsilon \\\\approx 4.4\\\\times 10^7$ for medium-utility DP-SGD. Their hyperparameters are provided in Appendix A.6. We apologize for not mentioning the table of privacy parameters in the appendix within the main text. We will include this in the revised manuscript to be clearer.\\n\\n> Q5: Additional stylistic notes and suggestions.\\n\\nWe appreciate your careful attention to stylistic details and suggestions for improvement. For the typos and odd phrasings, we will correct them in the revised version. Regarding the intuition for why initialization on public data provides privacy guarantees, we will incorporate the suggested citations into our analysis and discussion to strengthen the argument.\"}", "{\"title\": \"Reply to Reviewer qB2Z [2 / 2]\", \"comment\": \"> Q5: Details of experiments in Figure 3 and Figure 4?\\n\\nThank you for bringing this issue to our attention! We will include these details in the appendix to ensure clarity and address this point thoroughly.\\n\\nFor Figure 3, the experiments were conducted on CIFAR-10, where we used 500 mislabeled samples to simulate the most vulnerable data. The attack settings follow those of LiRA. For each defense, we trained 16 models, ensuring that each sample only appears in the training set of half of the models. We used leave-one-out cross-validation\\u2014each time using one model as the victim and the remaining 15 models as the attacker's shadow models. The shadow models were implemented using a ConvNet architecture. We evaluated the privacy protection levels for both average-case and worst-case. 
\\n\\nFigure 4 uses the same experimental setting as the undefended baseline (no defense), but here we set the number of shadow models to 256 instead of 32. For the \\u201caverage-case\\u201d subfigure, we select a normal sample and plot its loss distributions when it is a member and when it is not a member, respectively. For the \\u201cmost vulnerable\\u201d subfigure, we select a mislabeled canary and do the same thing.\\n\\n> Q6: When focussing on practical scenarios: do you think the situation would differ, if instead of using synthetic samples (e.g., mislabeled samples) for the auditing, you would try to find the most vulnerable data samples in the dataset? \\n\\nThis is a great question! \\n\\nIn a practical setting (specifically with natural data rather than strong canaries), the challenge lies in identifying the most vulnerable data in a given dataset without the need to train thousands of models. This is somewhat similar to [c]; it definitely needs further investigation!\\n\\n\\nIndeed, we have ongoing work showing a very efficient method to identify the most vulnerable data. In this way, the situation would not differ much. While we cannot disclose further details about that work due to the double-blind review process, we can certainly add some discussions about this in the appendix at a later stage.\\n\\n[c] Privacy Auditing with One (1) Training Run. NeurIPS 2023.\"}
I am still somewhat confused about the privacy leakage related to the logits similarity between the teacher's prediction on the private image, the teacher's prediction on the synthetic image, and the student's prediction on the synthetic image.\\n\\nFigure 9 shows that the student's prediction on the private image is different from the other three, and therefore it is not clear to me why there is a privacy concern for the private image with regard to the student model.\\n\\nQ4 & Q5. thanks for the clarification.\\n\\nQ6. thanks for your efforts.\\n\\nFor Q1, along with Q2, Q3, Q6, given that there are already prior works showing that evaluations of privacy-preserving machine learning methods should consider rigorous evaluation, and that the privacy leakage of the dataset distillation method studied in this work is also studied in [2], I wonder if this work could provide some outlines and suggestions on how to design rigorous evaluations for future research. \\n\\n[1] Aerni et al. Evaluations of Machine Learning Privacy Defenses are Misleading. CCS 2024.\\n\\n[2] Carlini et al. No free lunch in \\u201cprivacy for free\\u201d: How does dataset condensation help privacy. arXiv 2022\"}", "{\"summary\": \"This paper investigates whether using synthetic data in machine learning genuinely safeguards privacy, as often claimed. The evaluation is done on four different training paradigms: coreset selection, data distillation, data-free knowledge distillation, and synthetic data generated from diffusion models. To test the privacy claims of these methods, the study uses membership inference attacks (MIAs), focusing on worst-case scenarios to rigorously assess privacy leakage. The paper also compares these methods to Differential Privacy Stochastic Gradient Descent (DPSGD), a technique known for providing formal privacy guarantees, and finds that DPSGD consistently outperforms synthetic data-based approaches in terms of the privacy-utility-efficiency balance.
The findings reveal that none of the synthetic data techniques match DPSGD in safeguarding privacy effectively. Notably, the study also discovers that visual dissimilarity between synthetic and private data does not necessarily imply privacy protection, as even visually distinct synthetic data can leak information when model logits are similar. This highlights a risk that methods relying solely on visual or distributional differences may offer a false sense of privacy.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This broad approach offers a thorough understanding of various methodologies in synthetic data utilization and their impact on privacy.\nThe study juxtaposes synthetic data-based techniques with Differential Privacy-SGD (DPSGD) as a baseline, which helps readers contextualize the efficacy of synthetic data methods in privacy preservation compared to a gold-standard approach like DPSGD.\nThe study identifies instances where synthetic data, despite visual dissimilarity from private data, can still leak privacy information through logit similarities. This nuanced finding enhances the paper's depth by showing that visual similarity alone isn\u2019t sufficient to evaluate privacy.\", \"weaknesses\": \"The experiments focus on CIFAR-10 and specific models, such as ResNet-18, which may limit the generalizability of findings. The paper\u2019s findings could vary across more complex datasets or architectures, and broader experiments could better represent the implications for privacy in diverse real-world scenarios.\n\nTechniques like DPSGD are noted for efficiency, yet they are resource-intensive.
The paper briefly mentions but does not deeply engage with the practical constraints of computational cost and scalability, which are critical factors for real-world implementation of privacy-preserving methods.\n\nWhile the empirical evaluation is thorough, the paper lacks an in-depth theoretical framework to explain why certain synthetic data techniques lead to privacy leakage. A theoretical grounding could bolster the empirical findings and offer predictive insights for synthetic data privacy.\", \"questions\": \"Why do you consider coreset selection as synthetic data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}
We address your concerns in detail below.\\n\\n--- \\n\\n> Q1: Experiments on more datasets and more network architectures.\\n\\nThank you for suggesting that we report results on more datasets and network architectures. While we agree that using more datasets would have been interesting, we decided against this:\\n\\n(1) To do fair comparisons, we needed all studied methods to achieve reasonable test performance. Some methods, such as Dataset Distillation, perform poorly on more complex datasets like CIFAR-100 or ImageNet, which would make comparisons less meaningful.\\n\\n(2) For a similar dataset (e.g., CIFAR-100), reproducing Figure 1 alone would require more than 5800 GPU hours (without hyperparameter tuning and additional experiments). We decided that the auxiliary insights were not worth the cost.\\n\\n(3) Our approach aligns with previous studies like [1], which also conducted comprehensive evaluations of defense methods primarily on CIFAR-10. This consistency facilitates a direct comparison between our results and existing literature.\\n\\nAs our goal is to reveal the false sense of privacy protection of existing methods that train ML models with synthetic data, we believe that it is sufficient to focus on the most standard dataset used in empirical evaluations of privacy defenses. \\n\\n> Q2: DP-SGD is resource-intensive. Discussion about computational cost and scalability in real-world scenarios is required.\\n\\nOur main argument is that DP-SGD can serve as a strong heuristic defense, providing both good utility and empirical privacy protection. In fact, when using a small noise level and setting the privacy budget $\\\\epsilon \\\\gg 1000$, DP-SGD can be more computationally efficient than many other defenses. \\n\\nThe key insight is that, by using small noise, we forgo DP-SGD\\u2019s provable privacy guarantee in exchange for significantly higher utility, while also improving efficiency.
\\nFor training a single model with high-utility DP-SGD in our experiments, it only took **78** minutes. In this way, it is not particularly resource-intensive.\\n\\n\\n\\n\\n> Q3: An in-depth theoretical framework to explain why certain synthetic data techniques lead to privacy leakage.\\n\\nThank you for the valuable suggestion. We agree that a theoretical framework is indeed helpful and can offer predictive insights. However, since our focus is primarily on different types of **heuristic** defenses, which are highly heterogeneous, we believe it is both extremely difficult and not very meaningful to provide a generic theoretical framework for these defenses.\\n\\nAdditionally, none of these methods come with a theoretical foundation themselves, as they are heuristic defenses rather than theoretical defenses.\\n\\n\\n\\n> Q4: Why is the data from coreset selection considered as synthetic data?\\n\\nThank you for this question! Our initial idea is that, although coreset selection does not involve a data synthesis process, it shares a common goal with the other synthetic data methods: obtaining an informative proxy training set. The difference lies in the approach\\u2014data synthesis methods generate samples directly, while coreset selection automatically selects an informative subset from the original dataset. Therefore, we gave a footnote on page 1 to explain that, for simplicity, we use the term \\u201csynthetic data\\u201d (also for coreset) in the rest of the paper.\\n\\nMoreover, we would like to note that coreset selection just serves as a starting point\\u2014a very simple method\\u2014to demonstrate that average-case evaluations can be misleading. 
Additionally, through studying coreset selection, there is an interesting finding that the selection or unlearning of specific samples could introduce further privacy leakage\\u2014some selected samples exhibit a greater degree of privacy leakage compared to when they are part of the entire training set.\\n\\nWe will also consider moving this discussion to the appendix in the revised manuscript.\\n\\n\\n[1] \\\"Evaluations of Machine Learning Privacy Defenses are Misleading.\\\" CCS 2024.\"}", "{\"summary\": \"This paper provides a systematic analysis for privacy risk of the usage of synthetic data. The investigated methods include core set selection, dataset distillation, data-free distillation and diffusion models. The estimated privacy leakage is based on 500 canaries that are mislabeled and LiRA by shadow models. The results show that the privacy risk still remains and higher than DP-SGD at the same comparison of utility. This work also provides several detail analysis including the logits similarity.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"This work presents a systematic evaluation of privacy metrics TPR-FPR by LiRA for four synthetic data generation methods, and show that privacy leakage of synthetic data is much higher in than the results in previous literature.\", \"This work also provide some detailed analysis including considering visually similarity and logits similarity together instead of relying on the single MIA metric.\"], \"weaknesses\": [\"The analysis could be improved (see questions below).\", \"The technical contribution is somewhat limited. For example, the privacy estimation metric is based on previous work LiRA.\"], \"questions\": \"1. It is interesting to see the analysis for visual similarity and logit similarity. I did not quite follow the analysis for why the logit similarity of the synthetic data in DFKD leads to the higher MIA metric of private data. 
Figure 9 shows that the logit of the private data in the student model's prediction is very different from the remaining three; therefore, I wonder if the authors could explain why this logits similarity of the synthetic data could lead to the higher MIA metric.\n\n2. Figure 7 shows private data with high confidence in the original label, while Figure 9 shows private data with higher confidence in the mislabeled label. Would this indicate different canary properties for different kinds of methods?\n\n3. The dataset distillation method (random/OOD) shows a utility drop compared to DP-SGD. Would increasing the amount of synthetic data help improve utility?\n\n4. It seems to me the TPR value granularity for 500 canaries is 0.2% (1/500), and I wonder how the TPR values reported in the tables are calculated. Or in other words, the authors may provide a detailed description of how the MIA metric is calculated.\n\n5. I wonder if the authors could provide an adaptive attack that achieves higher attack performance based on the Figure 7 findings, where LiRA achieves a lower MIA metric. \n\nI am happy to increase my score if I find my concerns addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"reply\", \"comment\": \"Thanks for your reply!\\n\\n> why there is a privacy concern for the private image with regard to the student model.\\n\\nSimply put, when the teacher model is trained on data $x$, as long as the student model also assigns a relatively high probability to the canary label $y'$ (and the teacher model has a very high probability on this label, as shown in Fig. 9), the student model will have a very low loss, i.e., cross_entropy(f(x), y') is very low; when the teacher model is not trained on x, cross_entropy(f(x), y') is very high (due to model generalization, the model should predict the true label $y$). This huge gap in losses provides a strong membership signal.
\\n\\n> provide some outlines and suggestions on how to design rigorous evaluations for future research.\\n\\nBefore claiming that an empirical defense really protects privacy, make sure to try an adaptive attack first. Look into the privacy leakage in the worst-case scenario and, if necessary, the average case as well. Then do a fair comparison with DPSGD. Usually, this gives a solid evaluation overall. \\n\\nAnother tip is to visualize the synthetic data, like we've shown in some dataset distillation methods. If the synthetic data looks just like the private data, then the defense is certainly not a good defense. However, visual dissimilarity does not ensure that privacy is preserved.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the replies! I understand these compute cost considerations are valid points for not carrying out experiments on large image datasets. However, unfortunately I also think that conclusions relying on empirical results found using one dataset are not yet that definite; some other smaller datasets (perhaps from other domains) would strengthen the paper, as also pointed out by reviewer ybiF.\\n\\nIn addition to the paper by Hayes et al., 2019, there are also other papers on membership inference attacks on synthetic data, specifically targeting the data and not the generative models, see, e.g., [Van Breugel et al., 2023](https://arxiv.org/pdf/2302.12580).
It would be good to mention some of these approaches.\\n\\nI think the paper is well written and nicely illustrates the need for DP when fine-tuning generative models; however, due to the aforementioned reasons, I am keeping my score at 5.\"}", "{\"title\": \"Thank you for your responses\", \"comment\": \"I have read your responses and have decided to increase my score.\"}", "{\"title\": \"Reply to Reviewer qB2Z [1 / 2]\", \"comment\": \"Dear Reviewer qB2Z:\\n\\nWe really appreciate your acknowledgment of the solidness of our experiments and recognition of our work as a potentially valuable reference on the topic of privacy protection for synthetic data. In the remainder of this note, we have tried to address your comments and questions in as much detail as possible. Thank you once again for your kind consideration and time.
We welcome any further questions or suggestions that could improve the clarity and impact of our paper.\\n\\n---\\n\\n> Q1: Novelty of our work. Privacy audit on worst-case samples has been extensively studied.\\n\\nThanks for providing the references and we will include a discussion of them in the related work section. \\n\\nWe would like to kindly clarify that we do **NOT** claim privacy auditing on worst-case data to be our main contribution. Our aim is to show that even with existing auditing methods, it becomes evident that there are numerous misleading claims regarding the use of synthetic data in training models. \\n\\nBreaking the four defenses we analyzed was not particularly challenging. This highlights that before researchers claim their empirical defenses to be privacy-preserving, they should take one additional step - conducting rigorous evaluations, as demonstrated in our work. Such evaluations are not difficult to perform. We hope this inspires future researchers to approach this issue more carefully, which is our primary goal.\\n\\nWe hope that this work can bring these important concerns to the forefront of the community's attention and foster further discussions on avoiding misleading results in privacy.\\n\\n> Q2: Vulnerability of synthetic data against membership inference attacks has been considered in the literature.\\n\\nThanks for the questions. We have reviewed the suggested work [1]. It primarily focuses on applying MIA to generative models, with the attack target being to determine whether a specific sample was used to train a GAN. This is quite different from our approach, where we use synthetic data generated from the original private dataset to train classification models. Our goal is to determine whether a classification model trained solely on synthetic data can protect privacy.\\n\\n> Q3: Results on more datasets could be provided, e.g., datasets from other domains.\\n\\nThank you for suggesting reporting results on more datasets. 
While we agree that using more datasets would have been interesting, we decided against this:\\n\\n(1) To do fair comparisons, we needed all studied methods to achieve reasonable test performance. Some methods, such as Dataset Distillation, perform poorly on more complex datasets like CIFAR-100 or ImageNet, which would make comparisons less meaningful.\\n\\n(2) For a similar dataset (e.g., CIFAR-100), reproducing Figure 1 alone would require over 5800 GPU hours (without hyperparameter tuning and additional experiments). We decided that the auxiliary insights were not worth the cost.\\n\\n(3) Our approach aligns with previous studies like [a], which also conducted comprehensive evaluations of defense methods primarily on CIFAR-10. This consistency facilitates a direct comparison between our results and existing literature.\\n\\n\\n[a] Evaluations of Machine Learning Privacy Defenses are Misleading. CCS 2024.\\n\\n\\n\\n> Q4: Do you think similar comparisons could be easily carried out in different image datasets or in other domains (e.g., tabular data)?\\n\\nThat's a good point! It's worth mentioning TabDDPM [b] as an example. In this approach, the researchers first train a diffusion model using private training data, then generate synthetic tabular data, and finally train a high-performing classifier using only the synthetic data. While the authors claim this method protects privacy, it hasn't been evaluated under rigorous privacy testing protocols. \\n\\nWe believe this approach requires a more thorough privacy evaluation to validate these claims. We could also consider adding this experiment to the appendix based on your suggestions!\\n\\n[b] TabDDPM: Modelling Tabular Data with Diffusion Models. ICML 2023.\"}
I'm keeping my score at 6.\", \"q1\": \"Regarding the non-private baseline, I missed that in Figure 1.\", \"q4\": \"Adding some references to the appendix will be helpful to the reader.\"}", "{\"comment\": \"Thanks for the reply. This is a good point actually, I agree with you. In this sense the paper does its job, although more examples and datasets would strengthen it. After reconsideration, I have decided to raise my score by 1.\", \"minor\": \"there is a dot missing on line 778.\"}", "{\"summary\": \"This paper measures empirical privacy for several methods of training vision models from synthetic data that claim to preserve privacy to some degree with respect to the training data: CoreSet selection, Dataset Distillation, Data-Free Knowledge Distillation, and synthetic data from Fine-Tuned Diffusion models. Empirical privacy is evaluated using the Likelihood Ratio Attack (LiRA) and the general setup follows [0]. As a private baseline, DP-SGD is used.\n\nFor the evaluation, a ResNet architecture is used with the CIFAR-10 dataset. For each of these methods, the authors measure privacy leakage (true positive rate given a fixed false positive rate), utility (test set accuracy), and efficiency (# of training hours). The main findings are that there's a clear privacy-utility tradeoff among methods once privacy is considered as a worst-case rather than an average-case notion, and that none of the synthetic data methods outperform DP-SGD with the three metrics jointly considered.
See question 1 on this point.\"], \"weaknesses\": [\"The DP-SGD comparison is potentially misleading. The \\\"baseline\\\" method does not satisfy differential privacy. See question 2. The paper would benefit from including an additional baseline that satisfies DP, e.g., [4] discusses how to incorporate hyperparameter tuning into the privacy analysis. It may also be interesting to include a fully non-private baseline using a standard training routine, i.e., just train ResNet with SGD.\", \"The paper would benefit from a discussion of how the formal guarantee of differential privacy differs from heuristic privacy - particularly, with respect to DP applying to adding or removing any possible record within the data domain; whereas, the MI attacks in this paper are only considered with respect to the training data.\"], \"questions\": \"1. Why are the gridlines in Figure 1 not uniform? Consider changing this or explaining in the figure caption.\n2. As with [0], are the DP-SGD models reported in Table 4 only non-private due to hyperparameter tuning? If so, what privacy parameters were these models trained with?\", \"additional_stylistic_notes\": [\"Odd phrasing on Line 83: \\\"none of these fancy methods with synthetic data\\\"\", \"Typo on Line 95: \\\"it can provide a decent privacy protection\\\"\", \"Odd phrasing Line 244: \\\"but not evaluated in the right way\\\"\"], \"additional_suggestions\": \"- When discussing the intuition for why initialization on public data provides privacy guarantees, consider referencing [1, 2, 3], which illustrate how public data can be used to improve differentially private synthetic data. \n\n[0] Aerni, Michael et al. \\\"Evaluations of Machine Learning Privacy Defenses are Misleading.\\\" 2024.\n\n[1] Liu, Terrance, et al. \\\"Leveraging public data for practical private query release.\\\" International Conference on Machine Learning. PMLR, 2021.\n\n[2] Liu, Terrance, et al.
\\\"Iterative methods for private synthetic data: Unifying framework and new methods.\\\" Advances in Neural Information Processing Systems 34 (2021): 690-702.\\n\\n[3] Fuentes, Miguel, et al. \\\"Joint Selection: Adaptively Incorporating Public Information for Private Synthetic Data.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2024.\\n\\n[4] Papernot, Nicolas, and Thomas Steinke. \\\"Hyperparameter tuning with renyi differential privacy.\\\" ICLR, 2022.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer YLFf [1 / 2]\", \"comment\": \"Dear Reviewer YLFf:\\n\\nWe sincerely appreciate your recognition of our systematic evaluation and find our analysis of visual similarity and logits similarity interesting. We now address your concerns below.\\n\\n---\\n\\n> Q1: The technical contribution can be somewhat limited.\\n\\nWe would like to kindly clarify that we are not claiming our contribution as proposing a new or stronger MIA method. Our aim is to show that even with existing MIA methods, it becomes evident that there are numerous misleading claims regarding the use of synthetic data in training models. \\n\\nBreaking the four defenses we analyzed was not particularly challenging. This highlights that before researchers claim their empirical defenses to be privacy-preserving, they should take one additional step - conducting rigorous evaluations, as demonstrated in our work. Such evaluations are not difficult to perform. We hope this inspires future researchers to approach this issue more carefully, which is our primary goal.\\n\\n> Q2: Why synthetic data with similar teacher logits as the original private data harms the privacy of private data and leads to higher MIA metrics.\\n\\nThat\\u2019s a great question! 
We\\u2019ll answer it from the following two perspectives:\\n\\n- Why previous studies suggested that DFKD can protect privacy: Intuitively, since the student model is never trained on private data, it is assumed that releasing the student model should safeguard the privacy of private data.\\n\\n- Why DFKD actually **fails** to protect privacy: For the most vulnerable data point in the dataset (denoted as X), the teacher model memorizes this sample strongly. During the distillation process, a synthetic data point X\\u2019 (which is **visually completely dissimilar** to X) may inadvertently trigger the teacher model's memory of X. In the MIA process, if the logits similarity between X and X\\u2019 is very high, it implies that during the MI attack, even though the student model never directly saw private data X, it still indirectly learned about X by seeing synthetic data X\\u2019. As a result, the membership privacy of X is completely leaked.\\n\\nTherefore, even if a model has never directly seen private data, it can still potentially leak privacy. We should not rely on data that is visually completely dissimilar to private data as a guarantee of privacy protection. More rigorous evaluations are essential.\\n\\n> Q3: Figure 7 shows that private data that has high confidence in original label and Figure 9 shows the private data that has higher confidence in the mislabel label. Would this show the different canary property for different kind of methods.\\n\\nThat's a great observation! Indeed, canaries should be carefully tailored to different defenses and datasets. For instance, mislabeled data would not be effective canaries for defenses based on self-supervised learning, which does not rely on labels.\\n\\nIn our work, we did not use the most optimized canaries for each defense we evaluated. This means that with better-designed canaries, it's likely that even higher MIA success rates could be achieved. 
Despite this, our approach of using mislabeled data as canaries already yielded strong results, clearly demonstrating that these defenses provide a false sense of privacy protection.\"}" ] }
C8jXEugWkq
EqNIO: Subequivariant Neural Inertial Odometry
[ "Royina Karegoudra Jayanth", "Yinshuang Xu", "Ziyun Wang", "Evangelos Chatzipantazis", "Kostas Daniilidis", "Daniel Gehrig" ]
Neural network-based odometry using accelerometer and gyroscope readings from a single IMU can achieve robust and low-drift localization capabilities through the use of _neural displacement priors (NDPs)_. These priors learn to produce denoised displacement measurements but need to ignore data variations due to specific IMU mount orientation and motion directions, hindering generalization. This work introduces EqNIO, which addresses this challenge with _canonical displacement priors_, i.e., priors that are invariant to the orientation of the gravity-aligned frame in which the IMU data is expressed. We train such priors on IMU measurements that are mapped into a learnable canonical frame, which is uniquely defined via three axes: the first is gravity, making the frame gravity aligned, while the second and third are predicted from IMU data. The outputs (displacement and covariance) are mapped back to the original gravity-aligned frame. To maximize generalization, we find that these learnable frames must transform equivariantly with global gravity-preserving roto-reflections from the subgroup $O_g(3)\subset O(3)$, acting on the trajectory, rendering the NDP $O(3)$-_subequivariant_. We tailor specific linear, convolutional, and non-linear layers that commute with the actions of the group. Moreover, we introduce a bijective decomposition of angular rates into vectors that transform similarly to accelerations, allowing us to leverage both measurement types. Natively, angular rates would need to be inverted upon reflection, unlike acceleration, which hinders their joint processing. We highlight EqNIO's flexibility and generalization capabilities by applying it to both filter-based (TLIO) and end-to-end (RONIN) architectures, and outperforming existing methods that use _soft_ equivariance from auxiliary losses or data augmentation on various datasets. We believe this work paves the way for low-drift and generalizable neural inertial odometry on edge devices.
The project details and code can be found at [https://github.com/RoyinaJayanth/EqNIO](https://github.com/RoyinaJayanth/EqNIO).
[ "equivariance", "inertial odometry", "subequivariance" ]
Accept (Poster)
https://openreview.net/pdf?id=C8jXEugWkq
https://openreview.net/forum?id=C8jXEugWkq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xhJ41AqT40", "x5bAp11mVW", "udOmuUznjW", "u9hW4vT0iJ", "u0LLj7yrol", "sOZloNwFgq", "rxxkpsPYnW", "pgsoIKHYYb", "fVykPcJYSe", "fHuXJgGP15", "byvVnmrmJL", "YzyWbdalTU", "Tgu2XdjBnT", "SIbxasY6iv", "ROECalCK5O", "NvYRzCUWbN", "NAvIJWxt9T", "MW9cWcYFa8", "Ff3WU4OYpV", "EMMN3lGEnx", "Dj8pgKVsm2", "BtIAZqk2JQ", "Aq8Ar4oK3m", "6MO75WS9nJ", "39saFWvlG3" ], "note_type": [ "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732406589215, 1730740572125, 1732405741774, 1737524080926, 1732565041174, 1732407038281, 1732407860113, 1732407713057, 1732930176263, 1730764304333, 1733021982982, 1732407902701, 1732407273797, 1731091968897, 1734677641331, 1732930195728, 1732408278974, 1732408155002, 1731170402224, 1732930140797, 1732406763031, 1732408477245, 1732406979090, 1732555390147, 1732406248444 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Reviewer_EZfo" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Reviewer_AwiV" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" 
], [ "ICLR.cc/2025/Conference/Submission10846/Reviewer_wVti" ], [ "ICLR.cc/2025/Conference/Submission10846/Area_Chair_rGgf" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Reviewer_cuLt" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ], [ "ICLR.cc/2025/Conference/Submission10846/Reviewer_cuLt" ], [ "ICLR.cc/2025/Conference/Submission10846/Authors" ] ], "structured_content_str": [ "{\"comment\": \"**W1. The primary weakness of this paper is the clarity of its writing. I\\u2019m unable to fully understand the major differences between this work and RIO: Rotation-equivariance Supervised Learning of Robust Inertial Odometry.**\\n\\n While our work and RIO both aim to address the equivariance of neural inertial odometry by modeling rigid transformations of trajectory and IMU data, there are several distinct differences.\\n\\n(1) Most significantly, RIO addresses **approximate** equivariance, while our method enforces **strict equivariance**. Approximate equivariance enforces equivariance via an equivariance loss which penalizes inconsistencies in predictions on data from four rotated trajectories. By contrast, **strict** equivariance enforces **inherent** consistency by tailoring specific neural network components to be **exactly** equivariant. This work proposes several specialized linear and non-linear layers to guarantee the strict equivariance of the predicted canonical frame, while RIO uses traditional layers. \\n\\n(2) RIO adopts a Test-Time Training strategy, which adds computational overhead and thus increases the inference time as they are required to store multiple versions of the prediction model and augmented trajectories. 
Since our method is strictly equivariant, we do not require a Test-Time Training strategy. Moreover, despite not using such a strategy, our method outperforms RIO.\n\n(3) RIO only handles $2D$ displacement outputs and equivariance to rotations in SO(2), i.e. $2D$ planar rotations. By contrast, our method can handle both $2D$ and $3D$ outputs and thus addresses equivariance in $2D$ and subequivariance in $3D$. \n\n(4) Last, we also extend the modeling to roto-reflections, i.e. the group $O(2)$, which comprises rotations and reflections in the plane perpendicular to gravity and requires a novel bijection for angular rate preprocessing.\n\nTo highlight the difference in equivariance types (strict/approximate), we included a visualization of the different strategies in Appendix A.4 of the revision.\n\n**W2. While the key idea of this paper is clear, it\u2019s difficult to discern how it specifically diverges from the previous work. I strongly recommend that the authors begin by clearly outlining the main concepts, followed by a detailed description of the methodology. This structure would greatly help readers in understanding the unique contributions of this work.**\n\nAs discussed previously, our work significantly diverges from the previous works mentioned above (RIO) in four major ways: (i) approximate vs. strict equivariance, (ii) test-time training vs. no test-time training, (iii) $2D$ vs. $3D$ modeling, and (iv) equivariance to rotations in $SO(2)$ vs. roto-reflections $O(2)$. Please see our previous response for a recap.\n\nWe also thank the reviewer for the recommendation of reworking our method, and have tried our best to include the preliminary theory that is necessary to understand the concepts in this work in Appendix A.4. We supplemented this with additional pointers to textbooks and prior works. Regretfully, we believe, due to space considerations, that it is out of scope for us to include more preliminary theory in the main text. \n\n**Q1. 
What is the roto-reflection group, and why is it important? A more detailed explanation of this concept and its relevance would be helpful.**\\n\\nThe Roto-reflection group $O(n)$ is a set of $n\\\\times n$ matrices $R$ (in this work $n=2$), which are orthogonal, i.e. $RR^T = I_n$. Their determinant can be 1 or -1. This set of matrices forms a group because (i) it has an identity element $I_n$, (ii) the product of any two matrices from $O(n)$ is in $O(n)$, and (iii) every matrix $R\\\\in O(n)$ has an inverse $R^T\\\\in O(n)$. As shown in Figure 3a, the orange trajectory is obtained by reflecting the reference trajectory, and the purple one is obtained by rotating the reference trajectory. The reflections and rotations are the transformations in the roto-reflection group. We illustrate how this transformation acts on the inputs of the neural network in Figure 3b.\"}", "{\"summary\": \"This paper presents a method to enhance inertial odometry by applying group equivariance to canonicalize IMU data and targeting yaw ambiguity in gravity-aligned frames through a subequivariant framework.\\n\\nThe authors design a neural network architecture that maintains equivariance under roto-reflections around the gravity axis, allowing integration with existing systems like RONIN and TLIO. By predicting canonical yaw frames and equivariant covariance matrices, EqNIO improves generalization across diverse motion patterns and reduces drift caused by sensor noise and biases. 
Experiments on publicly available datasets demonstrate that this method achieves reductions in Mean Squared Error and Absolute Translation Error compared to baseline models, while also exhibiting faster convergence and maintaining computational efficiency suitable for deployment on edge devices.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper introduces an approach by applying strict subequivariance in neural inertial odometry, addressing yaw ambiguity and gravity alignment limitations directly within the network architecture, an aspect that prior methods often handled indirectly. Its originality lies in developing a canonicalization scheme using the roto-reflection group to simplify IMU data processing. The authors integrate and extend existing methods by adapting the framework to both end-to-end and filter-based neural inertial odometry systems, which demonstrates its flexibility and scalability.\\n\\nIn terms of quality, the paper provides detailed, reproducible implementation notes and thorough ablation studies in the appendix that clarify design decisions and evaluate parameterization choices. It emphasizes empirical rigor by testing the model across multiple datasets with varied sensor placements and motion patterns, supporting claims of robustness and broad applicability. Clarity is maintained through structured explanations of complex mathematical formulations, specifically around group theory and its relevance to sensor data processing, while the significant computational efficiency results underscore the practical utility for edge-device applications. 
The combination of technical insights and comprehensive empirical validation underlines the paper's contribution to advancing neural inertial odometry, particularly in settings with challenging orientations and device constraints.\", \"weaknesses\": \"The abstract describes EqNIO as leveraging \\\"canonical displacement priors\\\" to generalize across arbitrary IMU orientations, but it lacks a clear technical explanation of how these priors work in practice. Generalization is claimed to stem from \\\"canonical gravity-aligned frames\\\" and \\\"equivariant yaw frames,\\\" but the abstract could benefit from a more precise explanation of these transformations and their operationalization in the model.\\n\\nThe learnable yaw orientation in canonical frames is a promising feature but lacks clarity on how it resolves yaw drift or improves orientation estimation, given that yaw is typically the most challenging to estimate accurately in inertial odometry due to the absence of an absolute reference.\\n\\nThe introduction highlights EqNIO\\u2019s generalization and robustness but does not discuss potential limitations or scenarios where the approach may struggle (e.g., handling different sampling rates, extreme motions where IMU biases may not be fully mitigated, or contexts with poor gravity alignment).\\n\\nWhile EqNIO is compared to existing neural odometry methods like TLIO and RONIN, the introduction does not delve into specific weaknesses in these prior approaches and how EqNIO addresses these limitations.\\n\\nThe paper covers a broad range of related works but may omit some recent or seminal papers in the domain of learning-based inertial odometry and equivariant neural networks. Ensure a comprehensive literature review by including all relevant and recent works that contribute to the field. This includes verifying that seminal papers and the latest advancements are adequately cited to position EqNIO within the current research landscape. 
Due to the inherent relationship between odometry and inertial attitude estimation, as well as the similar methods applied to both, I highly encourage you to explore these areas further, including learning-based approaches to inertial attitude estimation.\n\nThe descriptions of related methods (e.g., TLIO, RONIN) are somewhat high-level and lack technical depth. Providing only superficial descriptions may not adequately highlight the nuances that differentiate EqNIO from these methods.\", \"questions\": \"Could you provide a technical explanation of how the canonical displacement priors are implemented in practice?\nHow exactly do the gravity-aligned frames and equivariant yaw frames work in your model architecture?\nWhat specific mechanisms in your learnable yaw orientation approach help address the yaw drift problem?\nCould you provide experimental evidence demonstrating how your method improves yaw estimation compared to existing approaches?\nHow does your model perform under varying IMU sampling rates?\nWhat are the performance characteristics under extreme motion scenarios where IMU biases may be significant?\nHow does the system behave in situations with poor gravity alignment?\nHow does your work relate to recent developments in learning-based inertial attitude estimation?\nCould you elaborate on the connections between EqNIO and current research in equivariant neural networks specifically applied to inertial navigation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**W1. The general canonicalization scheme has been proposed before by Kaba, S\u00e9kou-Oumar, et al. \\\"Equivariance with learned canonicalization functions.\\\" ICML 2023, and cannot be presented as a contribution. The original work must be cited.**\n\nThanks for pointing out the original work, which we cited in the revision (Section 2). 
As correctly pointed out, we achieve equivariance via the canonicalization scheme presented in Kaba et al. We want to highlight that our novelty is in characterizing and addressing the subequivariant structure in neural inertial odometry where the learned network predicts the displacement priors. To this end, we design a subequivariant framework by **applying** canonicalization and designing specific basic layers. We made the following modifications in the revision: (i) cite the work of Kaba et al. in the related work section, and add a separate related work section on canonicalization in Appendix A.1 due to space constraints, including citations of previous and subsequent papers on canonicalization; and (ii) modify our introduction to make clear that our novelty is in **applying** canonicalization to IMU data, and that, to achieve this, we tailor specific basic equivariant layers and a bijective map.\n\n**W2. While there is a reduction in drift compared to the baselines, the remaining drift is still significant (>2m) which suggests that the main problem in IO is not in exact equivariance but elsewhere (most likely sensor noise).**\n\nThe lack of principled subequivariance modeling in previous approaches accounts for a significant increase in drift, and we address this with the presented work. In fact, by correctly accounting for this geometric constraint, EqNIO reduces the MSE* by 57%, the ATE* by 12%, and the RTE* by 11% on the Aria Dataset (see Tab. 1).\n\nAs rightly pointed out, there still exists drift compared to VIO, and this highlights the fact that pure IO is a historically hard, but equally exciting, problem to study. Incorrect biases in addition to sensor noise are two known drivers of drift in learning-based IO. To address this, Brossard et al., 2020a, Brossard et al., 2020b, and Buchanan et al., 2023 train models to predict either IMU biases or debiased IMU data directly, which reduces drift by a significant margin. 
We believe that our framework can readily incorporate such methods to replace the current way of debiasing IMU measurements, which relies on factory-calibrated bias values. \n\n**Q1. Why is this canonical. equiv. scheme chosen over other equiv. choices? e.g. frame averaging (Puny et al. ICLR '22) also allows adapting existing non-equiv. Architectures.**\n\nWe thank the reviewer for this suggestion and present results for an additional baseline following the frame-averaging technique in Puny et al. ICLR \u201822 applied to our O(2) model. We first perform PCA (using the torch.pca_low_rank() implementation) on the IMU data, which results in a set of four frames, corresponding to the four solutions of PCA. We then use the equivariant format in Eqn. 5 of Puny et al. ICLR \u201822 to average the projected predictions. We show the results below, marked as TLIO + frame averaging, and include them in Appendix A.16 of the revision:\n\n| Model | TLIO Dataset MSE* | TLIO Dataset ATE* | TLIO Dataset RTE* | Aria Dataset MSE* | Aria Dataset ATE* | Aria Dataset RTE* |\n|------------------------|--------------------|--------------------|--------------------|-------------------|-------------------|-------------------|\n| TLIO | 0.0333 | 3.0786 | 0.5418 | 0.1525 | 4.5599 | 0.9771 |\n| + rot. aug | 0.0324 | 3.7219 | 0.5513 | 0.0532 | 2.1027 | 0.5208 |\n| + SO(2) Eq. Frame | 0.0319 | 2.4009 | 0.5006 | 0.0246 | 1.8639 | 0.4836 |\n| + O(2) Eq. Frame | 0.0298 | 2.4056 | 0.4775 | 0.0230 | 1.8491 | 0.4649 |\n| + frame averaging | 0.0321 | 3.0566 | 0.5358 | 0.0582 | 4.5535 | 0.9922 |\n\nWe see that while TLIO + frame averaging achieves a low error on the TLIO dataset, it fails to generalize to the Aria Dataset. Furthermore, our equivariant method (TLIO + O(2) Eq. Frame) outperforms it on both datasets and across metrics. We believe that the subpar performance of TLIO + frame averaging stems from the noise sensitivity of PCA. 
Due to this sensitivity, it likely overfits to the specific noise level present in the TLIO dataset, which does not match the one in the Aria dataset. Lastly, we want to note that the frame averaging technique significantly increases the inference time by a factor of four, as the model needs to be inferred several times, once for every constructed frame.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you very much for your valuable feedback and review.\"}", "{\"comment\": \"**Q1. I wonder for sensor fusion in the form of visual-inertial odometry, it could be helpful to have a good uncertainty estimate from the inertial odometry module. Uncertainty estimates could consider the modeling errors of neural networks, and propagated to the final module. Here, priors can also be defined using the physical properties of the system. Would it be a consideration to use Bayesian modelling tools?**\n\nThanks for suggesting this new direction and potential fusion for our model; these are indeed excellent questions and directions for research. \nFirst, we think integrating our work into visual odometry would be a direct future work following the procedure of (Chen et al., 2021a), which uses the provided displacement and uncertainties in a VIO backend. \nSecond, our current EKF is already an instance of Bayesian filtering with a Gaussian noise assumption, where the model prediction is the displacement prior, and the measurement step maximizes the posterior. Nonetheless, we believe that using additional Bayesian modeling of the displacement noise, leveraging specific physical properties of the system, such as the moment of inertia or sensor temperature, would greatly benefit our method. Leveraging more sophisticated filters like the Unscented Kalman Filter or Particle Filter would additionally benefit our method. 
Finally, we believe that future directions should incorporate not only aleatoric uncertainties (about data), which we used in the current work, but also epistemic uncertainties that arise from the model.\"}", "{\"comment\": \"**W5. The paper covers a broad range of related works but may omit some recent or seminal papers in the domain of learning-based inertial odometry and equivariant neural networks. Ensure a comprehensive literature review by including all relevant and recent works that contribute to the field. This includes verifying that seminal papers and the latest advancements are adequately cited to position EqNIO within the current research landscape. Due to the inherent relationship between odometry and inertial attitude estimation, as well as the similar methods applied to both, I highly encourage you to explore these areas further, including learning-based approaches to inertial attitude estimation.**\n\nWe thank the reviewer for the valuable feedback. In Section 2 and Appendix A.1 of the revision, we supplement our related work section with the following works in equivariant odometry without learning and inertial attitude estimation. We further invite the reviewer to suggest more.\n\nWe added a series of works [1,3,4,5] about embedding equivariance or symmetry into (visual) inertial odometry, but these methods are not learning-based. As far as we know, we are the first to bake the equivariance into the neural network for the inertial odometry problem. Integrating our equivariant network into these models would be a potential future work. They work on equivariant dynamics and use the lift function to map the state and the extended velocity to the Lie algebra associated with the system's symmetry group, enabling the construction of a lifted system on the Lie group. The error dynamics are linearized around a fixed origin, which makes them independent of the current state. 
\\n\\nWe added recent related works [2,6,7,8,9] on inertial odometry and inertial attitude estimation. In particular, inertial attitude estimation works do not address equivariance, and extending their work in this way is a potential avenue for future research. A particular difference to our setting is that they do not require gravity alignment, and thus the formulation would deviate slightly from the one in this work. The work in [7] extends the EKF filter to be a learnable component, which is an orthogonal research direction to ours, and can further enhance the performance of our method.\\nWe added the initial works on EKFs for inertial attitude estimation [10,11,12] in Appendix A.1 and also the most recent learning-based methods for LiDAR Odometry [13], VIO [14] and SLAM [15].\\n\\n**Please refer to the following box for the references**\"}", "{\"comment\": \"**W3. The introduction highlights EqNIO\\u2019s generalization and robustness but does not discuss potential limitations or scenarios where the approach may struggle (e.g., handling different sampling rates, extreme motions where IMU biases may not be fully mitigated, or contexts with poor gravity alignment).**\", \"we_highlight_the_sensitivity_of_our_method_with_respect_to_the_following_parameters\": \"Poor gravity alignment, a study which we already report in Fig. 7 of the original version, has been moved to Appendix A.13 Fig. 15 for consistency and space constraints; and, following the suggestions, we provide the following additional sensitivity studies on the sampling rate, and IMU bias accuracy in the Appendices A.14 and A.15.\\n\\nTo study IMU sampling rate sensitivity, we deploy our pre-trained model on IMU data that is resampled from a rate of r $\\\\in$ \\\\{50, 100, 200, 250, 500, 1000\\\\} to 200 Hz, since TLIO requires a fixed input size of 200 IMU measurements for 1-second. This means that for r < 200 IMU data is interpolated. 
We report the results without the EKF in the loop below:\n\n\n| Sampling Rate | Aria MSE* | Aria ATE* | Aria RTE* |\n|---------------|------------|------------|------------|\n| 1000 | 0.023242 | 1.876466 | 0.470085 |\n| 500 | 0.023164 | 1.878051 | 0.469828 |\n| 250 | 0.022908 | 1.882291 | 0.468969 |\n| 200 | 0.022763 | 1.881573 | 0.468524 |\n| 100 | 0.021909 | 1.931893 | 0.470235 |\n| 50 | 0.022564 | 2.174362 | 0.500758 |\n\nThis shows the relative stability of our method for rates equal to and above 200 Hz, with a maximal ATE increase of 0.2%. By contrast, going below 200 Hz leads to a higher increase of 15.8% (for 50 Hz). However, such low sampling rates are unlikely in real-world scenarios, since most commodity IMUs provide kilohertz-level sampling rates. \n\nNext, we study the sensitivity of our network to inaccurate bias estimation. For this, we monitor the MSE after perturbation of the biases used to de-bias the input IMU data to our network, as done by Liu et al. 2020. We sample uniform noise from the range $\\nu\\sim U[-r, r]$ where $r$ is defined in the tables below.\n\n| Gyro Bias Range | MSE* | \n|-----------------|--------|\n| 0.000 | 0.02981 |\n| 0.025 | 0.02981 |\n| 0.050 | 0.02981 |\n| 0.075 | 0.02981 |\n| 0.100 | 0.02981 |\n\n\n| Accel Bias Range | MSE* |\n|------------------|--------|\n| 0.0 | 0.02981 |\n| 0.1 | 0.02984 | \n| 0.2 | 0.03001 | \n| 0.3 | 0.03031 | \n| 0.4 | 0.03066 | \n| 0.5 | 0.03110 | \n\nOur results are in accordance with the sensitivity study conducted in TLIO (Liu et al., 2020).\n\nWe include a section discussing these sensitivity studies in Appendices A.12, A.13, A.14, and A.15 of the revision.\n\n**W4. 
While EqNIO is compared to existing neural odometry methods like TLIO and RONIN, the introduction does not delve into specific weaknesses in these prior approaches and how EqNIO addresses these limitations.**\n\nOur work addresses the following three weaknesses of prior work. First, prior networks are only trained to be consistent with limited types of IMU data transformations: Liu et al. 2020 (TLIO), Herath et al. 2020 (RONIN), and Cao et al. 2022 (RIO) are only trained to be consistent when applying **rotations** around the gravity axis. In our work, we train our model to be consistent when applying **roto-reflections**, and this requires the development of a novel bijection for preprocessing angular rates. We see that modeling this additional transformation improves results consistently. Second, RIO and RONIN only target trajectory tracking in 2D, while our method can flexibly track in 2D or 3D. This is because the network outputs of RIO and RONIN are only 2D velocities in the xy-plane. Our method can handle the equivariance transformations of 3D displacements and 3x3 covariances. Finally, TLIO, RONIN, and RIO all employ approximate equivariance to ensure consistency under the above transformations. This means that they either use data augmentation, auxiliary equivariant consistency losses, or Test-Time Training to minimize the inconsistency between network outputs from data under different rotations around gravity. Our method guarantees this inconsistency to be 0, throughout training and testing, by employing equivariant neural network layers that produce consistent outputs **by design**. As we show in Tabs. 1 and 2, the resulting networks consistently outperform all previous works by large margins.\"}", "{\"comment\": \"Respected Reviewer, we greatly appreciate your feedback and comments and have addressed them in our preceding responses. 
We look forward to your feedback.\"}", "{\"summary\": \"This paper presents a new method for inertial odometry, which predicts poses given IMU measurements. The method is called EqNIO, which brings the idea of so-called canonical displacement priors. It works by (1) mapping IMU measurements into a gravity-aligned, canonical frame with neural networks, and (2) mapping the outputs back to the original frame. Several contributions are presented. A canonicalization scheme is presented that maps IMU measurements into a canonical orientation. A processing step is devised, which maps both accelerometer and gyro readings into a space where the gravity direction is preserved. Finally, a neural network design is presented to perform regression tasks. Several experiments are presented, demonstrating advancements to the state of the art and ablation studies that motivate the overall approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Pros:\", \"The paper is written well with clear figures, and many intuitive explanations of complex concepts.\", \"The paper presents several technical contributions, which overall lead to a more generalizable framework for inertial odometry. Mixing physical properties with a learning-based regression module makes sense, which can boost generalization performance.\", \"Experimental results are presented to a large extent, demonstrating advancements to the state of the art. The ablation studies presented are useful to better comprehend the research done.\"], \"weaknesses\": [\"Cons:\", \"It is not clear if ICLR is the best venue for such research, since the learning component here is rather limited to a regression module.\", \"Uncertainty modelling assumes a diagonal covariance. The validity of this assumption is tested only by observing that it helps the final performance. Perhaps an in-depth analysis of this step could help, despite this not being the core focus of the paper. 
For example, there have been many evaluation tools from the uncertainty quantification literature that could be presented here.\"], \"questions\": \"I wonder whether, for sensor fusion in the form of visual-inertial odometry, it could be helpful to have good uncertainty estimates from the inertial odometry module. Uncertainty estimates could consider the modelling errors of neural networks and be propagated to the final module. Here, priors can also be defined using physical properties of the system. Would it be a consideration to use Bayesian modelling tools?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for taking the time to consider and review our rebuttal. We would be happy to address any other questions you may have about improving our work further and provide any necessary clarifications that would result in an increase in your rating.\"}", "{\"comment\": \"**W5 references**\n\n[1] Equivariant IMU Preintegration with Biases: an Inhomogeneous Galilean Group Approach \n Giulio Delama, Alessandro Fornasier, Robert Mahony, Stephan Weiss \n https://arxiv.org/abs/2309.13814\"}", "{\"comment\": \"**W1. The abstract describes EqNIO as leveraging \\\"canonical displacement priors\\\" to generalize across arbitrary IMU orientations, but it lacks a clear technical explanation of how these priors work in practice. Generalization is claimed to stem from \\\"canonical gravity-aligned frames\\\" and \\\"equivariant yaw frames,\\\" but the abstract could benefit from a more precise explanation of these transformations and their operationalization in the model.**\n\nThanks for the valuable suggestion. What we and previous work refer to as \u201cneural displacement priors\u201d are neural networks that take IMU measurements as input and generate displacement and covariance predictions as outputs, i.e. 
a prior probability distribution on the expected displacement, given a set of measurements. These priors can be incorporated into probabilistic filters to estimate the state of the IMU over time. When referring to \\u201ccanonical priors\\u201d, we mean priors (i.e. predicted displacement and covariances) expressed in a canonical frame. We uniquely define the canonical frame via three axes: the first is the gravity axis (making the frame gravity aligned), and the second and third are predicted by an \\u201cequivariant frame model\\u201d that takes IMU data as input. Note that this frame is not necessarily right-handed, and quantities expressed in this frame can, therefore, potentially undergo a reflection and rotation. This model is termed \\u201cequivariant\\u201d since it commutes with rotation around or reflection across the gravity axis (O(2) roto-reflection). This means that if the input is O(2) roto-reflected by an arbitrary transformation, the resulting output is roto-reflected by the same transformation.\\n\\nWe reworked the abstract to lay more emphasis on the explanations above and provide further details on the **operationalization** of the canonical frames in Q1 below.\\n\\n**W2. The learnable yaw orientation in canonical frames is a promising feature but lacks clarity on how it resolves yaw drift or improves orientation estimation, given that yaw is typically the most challenging to estimate accurately in inertial odometry due to the absence of an absolute reference.**\\n\\nWe believe that the term \\u201clearnable yaw\\u201d has generated some confusion. The canonical frame is specified by gravity and two learnable orthogonal vectors perpendicular to gravity. It does not define a yaw measurement and is thus not incorporated directly into the filter. Instead, this frame is used to canonicalize the input, generate the displacement, and then transform it back into the original frame. As a result, our network generates more consistent outputs (see Fig. 
1) when data is observed under different O(2) roto-reflections within the plane perpendicular to gravity. Since the network does not need to learn to generate consistent results across O(2) roto-reflections, it can overall generate better displacement and covariance estimates and, thus, improve the trajectory tracking in the EKF (when the frame is applied to TLIO). Since the measurement equation of the EKF depends on the IMU yaw orientation (see Eq. 14 in Liu et al. 2020, and Appendix A.6.4), better displacement measurements should result in better IMU yaw estimation and thus lower yaw drift. However, experimentally we observed this effect is very small, showing only a small improvement on the Aria Dataset (Tab. 3 see 2.073 deg average yaw error (AYE) for TLIO + rot. aug. vs. 2.059 deg AYE for TLIO + O(2) Eq. Frame).\"}", "{\"summary\": \"This work introduces EqNIO, a neural network-based odometry system that enhances localization accuracy using accelerometer and gyroscope data from a single IMU. Traditional neural odometry methods face challenges with generalization, as variations in IMU orientation and motion direction can disrupt displacement predictions. EqNIO overcomes this by training a model with canonical displacement priors, aligning IMU data to a gravity-aligned frame with learnable yaw. This approach ensures that the system\\u2019s outputs are invariant to rotations and reflections in the gravity direction, supporting robust generalization. Through carefully designed layers and an innovative angular rate decomposition, EqNIO can effectively integrate both acceleration and angular data. Tested on TLIO, Aria, RONIN, RIDI, and OxIOD datasets, EqNIO demonstrates superior performance and adaptability over existing methods, marking a step forward in low-drift neural inertial odometry suitable for edge devices.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"I think the method itself looks novel and interesting. 
It introduces a canonicalization scheme that leverages gravity and an estimated sub-equivariant frame to map IMU measurements into a canonical orientation. This procedure can be flexibly applied to arbitrary off-the-shelf network architectures by mapping the inputs into the canonical space and mapping the outputs back into the original space.\", \"weaknesses\": \"The primary weakness of this paper is the clarity of its writing. I\\u2019m unable to fully understand the major differences between this work and RIO: Rotation-equivariance Supervised Learning of Robust Inertial Odometry.\\n\\nWhile the key idea of this paper is clear, it\\u2019s difficult to discern how it specifically diverges from the previous work. I strongly recommend that the authors begin by clearly outlining the main concepts, followed by a detailed description of the methodology. This structure would greatly help readers in understanding the unique contributions of this work.\", \"questions\": \"What is the roto-reflection group, and why is it important? A more detailed explanation of this concept and its relevance would be helpful.\\n\\nWhat is the PCA (handcrafted equivariant frame)? A more detailed explanation of this concept and its relevance would be helpful.\", \"clarity_in_distinguishing_from_rio\": \"It appears that the figure is intended to convey the core idea of this work. However, the differences between this approach and RIO are unclear\\u2014elaborating on this distinction would strengthen the presentation.\", \"data_specification_in_figure_captions\": \"It would be beneficial if each figure caption specified which data is seen and which is unseen to enhance the reader's understanding. Note that the performance of different methods highly depends on how much data has been seen during training.
Including some of this contextual information directly in the main paper would make it easier for reviewers to follow your method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper tackles neural inertial odometry estimation, and seeks to regularize training by developing a network that is equivariant to rotations over x,y in a gravity-aligned frame. Theoretically, this is a more principle approach that the earlier methods of regularization via data augmentation and the design of the subequivariant layers is a neat contribution. Empirically, this leads to improvement over baselines and the contributions/design choices are well-ablated method. The reviewers were generally positive about this work, and the AC concurs with this and recommends acceptance. The authors are encouraged to make the text a bit more accessible in the final version based on the reviewer comments.\", \"additional_comments_on_reviewer_discussion\": \"R-wVti had clarity related concerns regarding how the method differs from RIO, and the AC believes the author response adequately clarified this (although R-wVti did not update their final rating).\"}", "{\"comment\": \"Respected Reviewer, we greatly appreciate your feedback and comments and have addressed them in our preceding responses. We look forward to your feedback.\"}", "{\"comment\": \"**Q6. What are the performance characteristics under extreme motion scenarios where IMU biases may be significant?**\\n\\nSee W3.\\n\\n**Q7. How does the system behave in situations with poor gravity alignment?**\\n\\nSee W3.\\n\\n**Q8. How does your work relate to recent developments in learning-based inertial attitude estimation?**\\n\\nSee W5.\\n\\n**Q9. 
Could you elaborate on the connections between EqNIO and current research in equivariant neural networks specifically applied to inertial navigation?**\\n\\nSee W5.\"}", "{\"comment\": \"**W6. The descriptions of related methods (e.g., TLIO, RONIN) are somewhat high-level and lack technical depth. Providing only superficial descriptions may not adequately highlight the nuances that differentiate EqNIO from these methods.**\\n\\nThank you for the suggestion. We have a list of differences between our method and the related methods, as well as include more details of related methods in the revision Appendix A.4, where changes are denoted in red. For specific weaknesses of these approaches that are addressed by our method, please refer to W4.\\n\\n**Q1. Could you provide a technical explanation of how the canonical displacement priors are implemented in practice?**\\n\\nPlease find a technical explanation below, which we incorporated into Appendix A.5.1. \\n\\n(1) Following Fig. 2 (a) we first gravity align the IMU data, i.e. rotate it such that the z-axis is aligned with gravity (estimated online by an EKF)\\n\\n\\n(2) We then process this IMU data by an \\u201cequivariant frame model\\u201d that outputs two vectors. This network takes in vector and scalar features derived from the IMU data. The equivariant network is strictly O(2) or SO(2) equivariant by design. The predicted vectors are then converted into an orthogonal set of unit vectors using Gram-Schmidt orthogonalization.\\n\\n(3) Then, we transform the input IMU data using the equivariant frame from (2) (\\u201ccanon.\\u201d block in Fig. 
2(a)) to produce invariant (consistent) inputs within the canonical frame, ensuring robustness to any roto-reflection group transformations applied to the original data.\\n\\n(4) The invariant input is fed into a standard neural network architecture (\\u201coff-the-shelf model\\u201d), such as TLIO's ResNet, to generate a displacement and covariance, termed canonical displacement prior expressed with respect to the learned canonical frame. These outputs remain consistent under transformations from the roto-reflection group applied to the original input.\\n\\n(5) Lastly, we project back the predicted displacement and covariance using the canonical frame to obtain an equivariant displacement and covariances using the equivariant frame of (2).\\n\\nFinally, depending on the backbone architecture on which our framework is applied (in our case for TLIO), the predicted displacement and covariances are fed into a filtering algorithm, like an EKF as a measurement to update the estimate of the IMU state (orientation, position, velocity and biases).\\n\\n**Q2. How exactly do the gravity-aligned frames and equivariant yaw frames work in your model architecture?**\\n\\nFor each 1-second window of IMU data, there is exactly one gravity-aligned frame and one equivariant yaw frame, outlined next: \\n\\nThe gravity-aligned frame is the frame into which the IMU data is transformed before processing by the equivariant frame model. This frame has its z-axis aligned with gravity but is otherwise unconstrained. We map the IMU data into this frame by simply rotating it along the shortest path such that the z-axis points toward gravity. We use the gravity direction estimated by the EKF for this step. \\n\\nThe equivariant yaw frame is defined as a composition of the gravity-aligned frame with a roto-reflection around gravity. It is defined by three axes: the z-axis being the gravity, and the other two being provided by the equivariant frame model. 
This frame is not necessarily right-handed. After producing these two vectors, we express the IMU data in this new frame, which we call the canonical frame. We then run the neural network and produce an invariant covariance and displacement (i.e. a canonical displacement prior). We then project the outputs of this network using this yaw frame into the original gravity-aligned frame. This entire process enhances the generalization of the network due to the reasons discussed in W1. For more details of the implementation of the canonical frame please refer to Q1.\\n\\n**Q3. What specific mechanisms in your learnable yaw orientation approach help address the yaw drift problem?**\\n\\nSee W4.\\n\\n**Q4. Could you provide experimental evidence demonstrating how your method improves yaw estimation compared to existing approaches?**\\n\\nAs mentioned previously in W2, our method produces more accurate displacement measurements than previous approaches, highlighted by the reduced ATE Tables 2 and 3. This improvement in displacement measurements should imply an improvement in yaw estimation due to the measurement equation\\u2019s dependence on the EKF yaw state (see Appendix A.6.4, A.6.5 for the Jacobian of the measurement with respect to the yaw). However, in practice, we only find a small improvement in terms of average yaw error on Aria (Tab. 3 see 2.073 deg average yaw error (AYE) for TLIO + rot. aug. vs. 2.059 deg AYE for TLIO + O(2) Eq. Frame). Since the Aria dataset is out of distribution with respect to the TLIO training set, this also highlights the slightly superior generalization ability of our method.\\n\\n**Q5.How does your model perform under varying IMU sampling rates?**\\n\\nSee W3.\"}", "{\"summary\": \"The authors propose a method to adapt existing inertial odometry (IO) architectures to be invariant to the IMU orientation. 
This is done by making use of an $O_g(3)$/$SO_g(3)$ equivariant network that transforms the gravity-aligned IMU measurements to a canonical frame as a pre-processing step for non-equivariant IO. The predicted displacement and covariance from IO for these canonicalized measurements are then transformed back to the source frame using the inverse canonical frame. The proposed method leads to improved accuracy while maintaining comparable runtime and can in principle be applied to any IO method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This is an interesting and novel application of equivariant networks to an under-explored but useful problem. The canonicalization approach is a good choice for this problem as it adapts existing sota IO architectures and keeps the pipeline interpretable.\", \"The symmetry of the problem in terms of $O_g(3)$ equivariance (which is $O(3)$ subequivariant) is well presented. Care has been taken to consistently process the IMU measurements and specialized $O(2)$/$SO(2)$ architectures based on vector neurons have been developed, while more architectures are possible.\", \"The approach was tested on two IO architectures which showed accuracy improvements on many datasets, with comprehensive ablation studies, while keeping the inference times comparable.\"], \"weaknesses\": [\"The general canonicalization scheme has been proposed before by Kaba, S\\u00e9kou-Oumar, et al. \\\"Equivariance with learned canonicalization functions.\\\" ICML 2023, and cannot be presented as a contribution. The original work must be cited.\", \"While there is a reduction in drift compared to the baselines, the remaining drift is still significant (>2 m), which suggests that the main problem in IO is not in exact equivariance but elsewhere (most likely sensor noise).\"], \"questions\": [\"Why is this canonical. equiv. scheme chosen over other equiv. choices? e.g. frame averaging (Puny et al.
ICLR '22) also allows adapting existing non-equiv. architectures.\", \"I'm confused about the choice of metrics, especially for the TLIO experiments. From the definitions in A.5 (I believe squared norm is missing), it seems that MSE is just sqrt(ATE)? But the numbers don't reflect this. And I also think ATE, RTE, AYE would be sufficient. Do you do SE3 alignment with the GT trajectories?\"], \"minor_non_critical_comments\": [\"Could you elaborate on the yaw augmentation procedure used for TLIO / RoNIN?\", \"It is surprising to me that despite requiring 10x more FLOPs than the non-equiv. architectures, there is barely any increase in runtime (<1 ms). Since there is no code release, can you comment more on the reasons for this efficiency?\", \"Writing: In Fig. 3b it is not clear what 'rot. sense' means; explain how the frame is constructed from the network outputs with gs-orth. for sake of clarity; Typo in conclusion: \\\"respects eliminates\\\"; Would be helpful to indicate that * means no-EKF in the table 2,3 captions or simply remove the * since it is not applicable to RoNIN.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Respected Reviewer, we greatly appreciate your feedback and comments and have addressed them in our preceding responses. We look forward to your feedback.\"}", "{\"comment\": \"**Q2. What is the PCA(handcrafted equivariant frame)? A more detailed explanation of this concept and its relevance would be helpful.**\\n\\nPCA refers to the method of finding the principal axes, i.e. axes which exhibit the highest and lowest variance in the $a_{xy}$, $v_{1,xy}$ and $v_{2,xy}$ components of the $n$ IMU measurements, stacked into a $(3n)\\\\times 2$ matrix $X$. When a yaw transformation acts on the data, $X$ transforms as $XR^T$. 
PCA performs a Singular Value Decomposition (SVD) on $X = U\\\\Sigma V^T$, where $U$, $V$ are orthogonal, $V$ is a $2\\\\times 2$ matrix. In this case, we can use V. V is an equivariant frame because under yaw transformations $XR^T$ we obtain $U\\\\Sigma V^T R^T$, so $V$ becomes $RV$ after transformation of the data, i.e. it transforms equivariantly under yaw transformation. We say that such an equivariant frame is handcrafted since it is not learned from a data set. The reason why we do not use PCA is that the PCA decomposition is ambiguous up to the sign of each principal axis (eigenvector). This means that small noise perturbations can cause discontinuous changes in the frame. For this reason, canonicalizing with PCA frames results in poorer performance as compared to our method.\\n\\n**Q3. Clarity in distinguishing from RIO: It appears that the figure is intended to convey the core idea of this work. However, the differences between this approach and RIO are unclear\\u2014elaborating on this distinction would strengthen the presentation.**\\n\\nSee W1. \\n\\n**Q4.Data specification in figure captions: It would be beneficial if each figure caption specified which data is seen and which is unseen to enhance the reader's understanding. Note, the performance of different method highly depends on how much data is seen or trained.**\\n\\nThe confusion is our fault and we corrected it in the revision Section 5, Table 2, and Appendix A.3. Seen and unseen labels refer to specific subsets of the RONIN test dataset. Specifically, RONIN-U (Unseen) is a test set that contains IMU measurements from people who did not participate in the training and validation data collection. The people used to record the RONIN-S (Seen) overlap with those from the training and validation set, but their data is disjoint from the training and validation set. 
The RONIN-U dataset, thus, tests the generalization capabilities of a method.\\n\\nTo the point of \\u201cwhich data is seen and unseen\\u201d, we believe the reviewer is referring to the training, validation (seen), and test (unseen) datasets in each case, which we gladly added to the captions. For each table, the listed datasets were not seen by any method as they constitute the test set. For training, Tab. 1 and 3, and Figs. 4-7 each method uses the TLIO Dataset, and for Tab. 2 each method uses 50% of the RONIN training set (since only 50% is public), and we include the original RONIN result using 100% of the training data.\\n\\nWe added these clarifying details in the revision Section 5, Table 2, and Appendix A.3.\\n\\n**Q5. Supplementary material vs. main paper clarity: The supplementary material provides much better clarity than the main paper. Including some of this contextual information directly in the main paper would make it easier for reviewers to follow your method.**\\n\\nWe appreciate your positive feedback on the supplementary material and we have now enriched our manuscript with more contextual information and details in the revision (all changes are indicated in red) while also staying within the page limit.\"}", "{\"title\": \"Global Comment\", \"comment\": \"We thank all reviewers for their insightful and constructive feedback, and helpful comments for improving the clarity of the paper. Below we will first summarize the changes towards a clearer exposition of our work, followed by a rebuttal given to individual reviewers based on their feedback. Finally, we provide the revision of the paper with incorporated changes marked in red.\", \"we_made_the_following_modifications_to_the_manuscript\": \"In the related work section, we cited the work of Kaba et al., which first proposed to achieve equivariance by learning canonicalization. 
We added a section on canonicalization in Appendix A.1 as suggested by Reviewer cuLt, and added more references on inertial attitude estimation and the broader fields of visual-inertial odometry and SLAM based on the feedback from Reviewer EZfo. As suggested by Reviewer cuLt, we added another ablation comparing our method to frame-averaging (Omri Puny et al., ICLR 2022), which further highlights the benefits of learning the equivariant frame. Following the suggestion by Reviewer wVti, we added a section in Appendix A.4 to provide a detailed comparison of our method with previous works like RIO, TLIO, and RONIN highlighting our main difference, which is exact equivariance as opposed to approximate equivariance. A major difference in our work is the handling of reflection symmetries in addition to simple rotation symmetries around the gravity axis (termed roto-reflections). In particular, we show that reflecting quantities across planes parallel to gravity is a viable symmetry transformation of the input and output, and designing an equivariant network to deal with this symmetry leads to improved generalization of displacement priors. As proposed by Reviewer AwiV, we conduct a more detailed uncertainty analysis of the three different covariance parameterizations explored in our paper. To this end, we quantify the proportion of 3-sigma outliers, and the negative log-likelihood of the test set under predicted means and uncertainties, and include this in the revision Appendix A.17. As recommended by Reviewer EZfo, we conducted a sensitivity analysis by varying the IMU sampling rates and perturbing the IMU biases, and this analysis highlights the robustness of our method.\\n\\nLastly, we thank the reviewers for recognizing the novelty of EqNIO, the thoroughness of the mathematical treatment, and the impact of addressing this problem on the growing field of inertial odometry. 
In particular, Reviewer cuLt praises our work as *\\u201can interesting and novel application of equivariant networks to an under-explored but useful problem\\u201d*. Reviewers wVti and cuLt recognize the novelty of our method and consider *\\u201cthe canonicalization approach a good choice for this problem as it adapts existing sota IO architectures and keeps the pipeline interpretable\\u201d*. The reviewers also acknowledge our soundness and contribution as seen in the comments of reviewer AwiV who writes *\\u201cThe paper presents several technical contributions, which overall leads to a more generalizable framework for inertial odometry. Mixing the physical properties with a learning-based regression module makes sense, which can boost generalization performance\\u201d* . This sentiment is mirrored by Reviewer EZfo who says *\\u201cThe combination of technical insights and comprehensive empirical validation underlines the paper's contribution to advancing neural inertial odometry, particularly in settings with challenging orientations and device constraints\\u201d*. The Reviewer wVti appreciates the clarity of our paper in their comments by stating the paper is *\\u201cwritten well with clear figures and many intuitive explanations of complex concepts\\u201d*. Reviewer EZfo also writes *\\u201cClarity is maintained through structured explanations of complex mathematical formulations, specifically around group theory and its relevance to sensor data processing\\u201d*. Reviewer cuLt writes *\\u201cThe symmetry of the problem in terms of O_g(3) equivariance (which is O(3) subequivariant) is well presented\\u201d*.\"}", "{\"comment\": \"**W1. It is not clear if ICLR is the best venue for such research, since learning components here is rather limited to a regression module.**\\n\\n Equivariance has been a prevalent topic in ICLR. 
Because the canonicalization is a preprocessing step and yields a group action, the data transformed with this action can be fed into any architecture beyond regression. \\n\\n**W2. Uncertainty modelling assumes diagonal covariance. The validity of these assumptions is tested by looking at whether it helps the final performance. Perhaps an in-depth analysis on this step could help, despite not being the core focus of the paper. For example, there have been many evaluation tools in the uncertainty quantification literature that could be presented here.**\\n\\nWe apologize for not formulating this in a clear way. The covariance is diagonal in the canonical frame, but after back-transforming it with the roto-reflection predicted by the canonicalization, the resulting covariance becomes non-diagonal. This is a feature of our system: the canonical frame encodes the orientation of the covariance matrix. \\n\\nIn addition to supporting this design choice empirically by running our neural network (see Tab. 3), we add additional statistical analyses below, using two more techniques: (1) we report the percentage of displacement predictions by our network that have an error outside of the 3-sigma bound, in x, y, and z direction (denoted $\\\\delta_x,\\\\delta_y,\\\\delta_z$ in percent), where sigma is given by our network, and (2) we compute the median negative log-likelihood (median NLL) of the ground truth samples given the mean and covariance provided by our method.
All results were calculated in the canonical frame and pertain to the Aria Dataset.\\n\\nIn addition, we highlight the importance of learning the covariance by introducing two additional baselines: first, similar to Liu et al. 2020, we use our pre-trained method to generate displacement measurements, but use a constant isotropic covariance (sigma=0.01) instead of the learned one (indicated with +constant cov); second, we train our model only on MSE for 20 epochs (indicated with (mse)+constant cov) and deploy it with the same constant covariance.\\n\\n\\n| TLIO \\t\\t\\t\\t\\t| $\\\\delta_x$ | $\\\\delta_y$ | $\\\\delta_z$ | median NLL | \\n| ------- | ----------------- | ----- | ----- | ----- |\\n| + O(2) Eq. Frame +S \\t\\t| 3.49 | 3.35 | 7.75 | -7.0551 |\\n| + O(2) Eq. Frame \\t| 0.59 | 0.71 | 0.51 | -7.3191 |\\n| + O(2) Eq. Frame +P \\t| 0.00 | 0.96 | 0.76 | -4.1156 |\\n| + O(2) Eq. Frame + constant cov \\t| 2.18 | 2.20 | 0.46 | -4.1111 |\\n| + O(2) Eq. Frame (mse) + constant cov \\t| 0.72 | 0.72 | 0.04 | -3.9189 |\\n\\nWe see that TLIO + O(2) Eq. Frame, TLIO + O(2) Eq. Frame + P and TLIO + O(2) Eq. Frame (mse) + constant cov all show low outlier counts. However, low outlier counts can also be achieved by predicting high covariances. This strategy increases the median NLL, as seen in TLIO + O(2) Eq. Frame + P and TLIO + O(2) Eq. Frame (mse) + constant cov, showing that they overestimate covariances. Our method (TLIO + O(2) Eq. Frame) shows the lowest median NLL of all methods, with low outlier counts.\\n\\nFor completeness, we also report the tracking performance of the new methods below:\\n| TLIO | TLIO ATE | TLIO RTE | TLIO AYE | Aria Avg ATE | Aria Avg RTE | Aria Avg AYE |\\n|--------------------------------------|-----------|-----------|-----------|--------------|--------------|--------------|\\n| + O(2) Eq. Frame +S | 1.4836 | 0.4623 | 2.3902 | 1.1752 | 0.4211 | 2.0433 |\\n| + O(2) Eq.
Frame | 1.4328 | 0.4583 | 2.3894 | 1.1181 | 0.4159 | 2.0592 |\\n| + O(2) Eq. Frame +P | 1.8267 | 0.5776 | 2.5342 | 1.7546 | 0.5636 | 2.2234 |\\n| + O(2) Eq. Frame + constant cov | 1.6691 | 0.5063 | 2.4811 | 1.6801 | 0.5335 | 2.1971 |\\n| + O(2) Eq. Frame (mse) + constant cov | 2.8827 | 0.7988 | 2.4769 | 2.0319 | 0.6521 | 2.2244 |\\n\\nWe see that, first, O(2) Eq. Frame + constant cov performs worse than our method, indicating that our learned covariance adapts to the specific learned displacements. Second, we see that (mse)+constant cov performs even worse. This is likely due to the network being overconfident in its displacement prediction before MLE finetuning, which results in significant outlier prediction.\\n\\nWe have included this analysis in Appendix A.17 of the revision. We look forward to your suggestions to further analyze the various covariance parameterizations used in the paper.\"}", "{\"comment\": \"Thank you for addressing all my questions and comments.\\n\\nThe accuracy improvements still seem marginal to me (ATE / RTE are more informative performance metrics than the MSE loss) compared to the increased complexity and run-time (on-device run-time will be even higher). Nonetheless, this work is still an interesting and novel application of learned canonicalization that is well developed and has potential for future work. Thus I have increased my score from 5 to 6.\"}", "{\"comment\": \"**Q2. I'm confused about the choice of metrics, especially for the TLIO experiments. From the definitions in A.5 (I believe the squared norm is missing), it seems that MSE is just sqrt(ATE)? But the numbers don't reflect this. And I also think ATE, RTE, AYE would be sufficient. Do you do SE3 alignment with the GT trajectories?**\\n\\nThanks for pointing out the typo. Here, the MSE is, as stated in the text, the mean of the square error of displacement predictions $\\\\hat{d}_i$ over 1s-window, and not the position $\\\\hat{p}_i$. 
Therefore, the definition of the error should be $\\mathrm{mean}_i ||d_i -\\hat{d}_i||^2$, where $||.||$ is the Euclidean norm. We have corrected this in the revision Appendix A.7.\\n\\n**Minor non-critical comments:**\\n \\n**M1. Could you elaborate on the yaw augmentation procedure used for TLIO / RoNIN?**\\n\\n We follow the procedure in Liu et al., 2020, which randomly rotates each 1 s window of IMU data and the corresponding ground-truth displacement about the yaw axis of the local gravity-aligned frame (the z-axis, aligned with gravity) during training. The yaw angle is sampled from the range $[-\\pi, \\pi]$.\\n\\n**M2. It is surprising to me that despite requiring 10x more FLOPs than the non-equiv. architectures, there is barely any increase in runtime (<1 ms). Since there is no code release, can you comment more on the reasons for this efficiency?**\\n\\nWe apologize for this confusion and have noticed that there was an error in timing. The corrected runtimes are as follows: TLIO takes 2.79 ms, TLIO + SO(2) Eq. Frame takes 5.43 ms and TLIO + O(2) Eq. Frame takes 5.70 ms per inference. We incorporated this change in Appendix A.5.2 in our revision and we will release the code upon acceptance. \\n\\n**M3. Writing: In Fig. 3b it is not clear what 'rot. sense' means;**\\n\\nRot. sense stands for the sense of rotation, i.e., the direction of spinning implied by the direction of the arrow, which follows the right-hand rule.\\n\\n**explain how the frame is constructed from the network outputs with gs-orth. for sake of clarity;**\\n\\nWe predict equivariant vector features with two channels, which can be interpreted as two vectors $v_1$ and $v_2$. The gs-orth. module operates in three steps following Gram-Schmidt orthogonalization: First, it normalizes $v_1$, resulting in $f_1$. Then it subtracts the component of $v_2$ which is parallel to $v_1$ (i.e. $v_2^* = v_2 - \\langle f_1, v_2\\rangle f_1$). Finally, it normalizes $v_2^*$, resulting in $f_2$.
The frame $F = [f_1, f_2]$ is the desired equivariant, orthogonal frame.\\n\\n**Typo in conclusion: \\\"respects eliminates\\\";**\", \"the_correct_sentence_should_read\": \"\\u201cOur canonicalization scheme eliminates the underlying yaw ambiguity in gravity-aligned frames which arise from roto-reflections in the plane around gravity.\\u201d We have updated this in revision Section 7.\\n\\n**Would be helpful to indicate that * means no-EKF in the Table 2,3 captions or simply remove the * since it is not applicable to RoNIN.**\\n\\nThank you for the suggestions and we fixed this in the revision Tables 2 and 3 to consistently indicate no-EKF with *.\"}" ] }
C85eSjKenO
Tensor-GaLore: Memory-Efficient Training via Gradient Tensor Decomposition
[ "Robert Joseph George", "David Pitt", "Jiawei Zhao", "Jean Kossaifi", "Cheng Luo", "Yuandong Tian", "Anima Anandkumar" ]
We present Tensor-GaLore, a novel method for efficient training of neural networks with higher-order tensor weights. Many models, particularly those used in scientific computing, employ tensor-parameterized layers to capture complex, multidimensional relationships. Scaling these methods to high-resolution problems makes memory usage grow intractably, and matrix-based optimization methods lead to suboptimal performance and compression. We propose to work directly in the high-order, complex tensor parameter space, applying a tensor factorization to the gradients during optimization. We showcase its effectiveness on Fourier Neural Operators (FNOs), a class of models crucial for solving partial differential equations (PDEs), and provide a theoretical analysis of the method. Across various PDE tasks such as the Navier-Stokes and Darcy Flow equations, Tensor-GaLore achieves substantial memory savings, reducing optimizer memory usage by up to 75\%. These substantial memory savings across AI for science demonstrate Tensor-GaLore's potential.
[ "neural operators", "PDE", "optimization", "pre-training", "Large scale training", "AI4Science" ]
Reject
https://openreview.net/pdf?id=C85eSjKenO
https://openreview.net/forum?id=C85eSjKenO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rWOU5YVjmJ", "oxsgryEAV4", "lExPhgbcsq", "kwZ543hD5X", "kIQHoo0W7b", "dZ5hNskcIh", "cm4CBQdoTz", "bLXNlSdwFM", "aY6PCmhuFD", "aAHXfaLaUb", "ZhxUCbY65t", "YIgz3i9Z4e", "WveGnC47pK", "WETyNnHXKP", "VQBovV4lHP", "UzWMPo9wy9", "SSQQCJFnvj", "PdvFw8jREc", "LmPDHRFIy9", "J8Ej7wfpCC", "DkhWVvDhNg", "CPzS1KAd4Y", "AZ1jaebwPI", "8jMQbzKvDo", "7AN9s909hN", "66YuMcnIOr", "57QmGMwXbS", "4NfF0Mjckg", "1fVLE3f52v" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731896808480, 1731898645533, 1731897565323, 1733087697564, 1731900439136, 1730061808051, 1731900740973, 1733253779434, 1731899033672, 1730700629834, 1732388836428, 1734894335824, 1737523893537, 1733087703716, 1730691383806, 1731901139425, 1731898087828, 1731899716357, 1733087684261, 1731898343118, 1733199687642, 1732388812318, 1730707002399, 1732388802487, 1733087692050, 1733268607730, 1732559659775, 1733269234276, 1732388790764 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Reviewer_mEXm" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8197/Reviewer_RFRZ" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Area_Chair_JFKu" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Reviewer_oxLj" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Reviewer_oxLj" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Reviewer_po1K" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ], [ "ICLR.cc/2025/Conference/Submission8197/Authors" ] ], "structured_content_str": [ "{\"title\": \"General Response\", \"comment\": \"### Thank you to all the reviewers for the thoughtful and insightful feedback on our paper. We greatly appreciate the time and effort you have put into reviewing our work. We have uploaded a revised version, which is more thorough and has more details to some of the reviewer's questions, along with over 20 pages of the theory of our method attached to it.\\n\\n**In this paper, we have proposed Tensor-GaLore, a novel method for efficient training of Fourier Neural Operators (FNOs), a class of models crucial for solving partial differential equations (PDEs) in scientific computing. Our contributions include**\\n\\n1. 
We propose **Tensor-GaLore, which leverages tensor decomposition techniques to project the high-dimensional tensor gradients onto low-rank subspaces, enabling substantial memory savings without compromising model performance**. Specifically, Tensor-GaLore utilizes the Tucker decomposition to decompose the gradient tensors into a core tensor and factor matrices. This allows us to project the gradients onto a low-rank subspace and perform the optimization in this compact representation, leading to significant reductions in memory usage for the optimizer states.\\n\\n2. We now provide a **comprehensive theoretical analysis of Tensor-GaLore, proving both convergence guarantees and the emergence of the low-rank structure during training for FNOs**. Our theory shows that:\\n\\n 2.1. The gradient tensors naturally develop low-rank structure in each mode during training\\n\\n 2.2. Tensor-GaLore achieves convergence through mode-wise projections while preserving the multidimensional relationships\\n\\n 2.3. We prove the convergence of Tensor-GaLore under mild conditions on the mode-k continuity of the operators\\n\\n 2.4. Our theoretical results explain why the low-rank approximation works well in practice\\n\\n3. We demonstrate that **Tensor-GaLore can achieve over 75% reduction in optimizer state** compared to the original FNO model while maintaining or even improving the model's performance across a diverse set of PDE tasks, including Navier-Stokes, Burgers' equation, and electromagnetic wave propagation. The ability to drastically reduce the memory footprint of FNOs is a crucial advancement that enables the application of these powerful models to high-resolution scientific computing problems. We carefully analyze the computational overhead introduced by the tensor decomposition operations. While Tensor-GaLore does introduce some computational overhead (5-20% slowdown depending on the configuration), we show that:\\n\\n 3.1. 
The slowdown is modest compared to the significant memory savings achieved. The time cost for gradient projection remains constant regardless of input size, while the forward pass, backward pass, and gradient computation times scale linearly with input size. **This leads to decreasing relative slowdown as problem size increases**\\n\\n 3.2. The overhead can be amortized through techniques like \\\"warm-restart\\\" initialization of tensor decomposition\\n\\n 3.3. The trade-off between memory savings and computational cost can be controlled through the choice of rank parameters\\n\\n 3.4. For many scientific computing applications, the memory reduction enables training of larger models that would otherwise be impossible, justifying the modest increase in computation time\\n\\nWe hope that these contributions together provide both theoretical understanding and practical benefits for efficiently training large-scale neural operators. The memory savings and theoretical guarantees make Tensor-GaLore a valuable tool for advancing AI in scientific computing.\"}", "{\"title\": \"Response to Reviewer mEXm (Part 2)\", \"comment\": \"**I am not sure about the novelty in the manuscript, it seems that the authors have taken a FNO example and then sold the tensor compression of the weights as something that does not require SVDs (which I am sure are under the hood).**\\n\\n**Answer:** Again we thank the reviewer for bringing up this point. First, it is important to emphasize the critical role of tensor-structured weights in FNOs, the key focus of our work. FNOs are a class of neural network architectures designed to learn complex, multidimensional mappings between function spaces, which are crucial for solving parametric partial differential equations (PDEs). 
The weight tensors in FNOs can have orders as high as 4 or 5, capturing intricate relationships between spatial, temporal, and channel dimensions.\\n\\nEffectively handling these high-order tensor weights is essential for the success of FNOs in scientific computing applications. Flattening these tensors into matrices, as done in the matrix-based GaLore approach, can lead to a significant loss of important dimension-specific information, compromising the model's ability to capture the underlying physical phenomena. This is a key limitation that Tensor-GaLore aims to address. Now, to the reviewer's point about the Tensor-GaLore approach not being novel because it still relies on SVD computations in the background: while we disagree with this, please look at the previous response to your question for the detailed answer on how tucker decomposition is essentially higher order SVD. We do not claim to re-invent a new decomposition, but showcase why GaLore is not suitable for FNOs and higher-order tensors and prove the theory that tucker decomposition is good for it.\\n\\nLastly, in our new revised version, more details have been added. Tensor-GaLore is not merely applying tensor compression to FNO weights - it introduces a fundamentally different approach to gradient optimization by working directly with the natural tensor structure of neural operators through Tucker decomposition. While both SVD and Tucker decomposition use orthogonal matrices, their mathematical properties and applications are distinctly different. Our comprehensive theoretical analysis (Sections H, I, J) rigorously proves this distinction - starting from fundamental tensor operations, through FNO reversibility, to explicit convergence guarantees and characterization of low-rank structure emergence during training. We show that tensor gradients naturally develop mode-specific low-rank structures under mild conditions, with explicit bounds on stable rank evolution. 
This explains why preserving tensor structure through Tucker decomposition is fundamentally more suitable than matrix-based approaches like SVD, which collapses the multi-dimensional relationships. Our empirical results validate this theory - Tensor-GaLore maintains or improves performance while achieving significant memory savings, whereas matrix approaches can actually hurt performance (e.g., -223% on Navier-Stokes with GaLore).\"}", "{\"title\": \"Response to Reviewer po1K\", \"comment\": \"**Despite the novel application, the approach is a somewhat straight-forward extension of GaLore to tensor-weight models, replacing SVD decomposition with Tucker.**\\n\\n**Answer:** We thank the reviewer for their concern. While GaLore operates on weight matrices and uses SVD to project gradients onto low-rank subspaces, the key challenge in applying this to tensor-weight models is the loss of important multidimensional structures and relationships. Directly applying GaLore by flattening tensor weights into matrices can discard crucial information about the different tensor dimensions, such as spatial, temporal, or channel relationships. Tensor-GaLore addresses this by leveraging tensor decomposition, specifically the Tucker decomposition, to project gradients while preserving the intricate higher-order structure of the tensor weights. This allows Tensor-GaLore to better capture the complex, multi-scale relationships in scientific computing applications like neural operators.\\n\\nPlease check the general response for the more detailed revision. These technical innovations and the theory accompanying them, combined with the unique challenges of tensor-weight models, make Tensor-GaLore a meaningful advancement over the original GaLore approach rather than a straightforward extension, especially in the context of tensor-based models like Neural Operators, which are huge in AI4Science. 
We are happy to answer any follow-up questions.\\n\\n**There is a lack of discussion on the slowdown in training, given the overhead.**\\n\\n**Answer:** While Tensor-GaLore does introduce additional computational overhead from the tensor decomposition step, we have carefully analyzed the impact on training speed and efficiency. Our experiments have shown that the memory savings achieved by Tensor-GaLore often outweigh the slight increase in computational cost, resulting in an overall improvement in training time and resource utilization. Specifically, we have measured the training time for Tensor-GaLore compared to the baseline FNO model and the GaLore approach. Our results indicate that the slowdown in training time is modest, typically in the range of 5-20% depending on the dataset and model configuration. This is a reasonable trade-off given the significant memory savings (up to 75% reduction in optimizer memory) that Tensor-GaLore provides. The detailed slowdown data on Navier-Stokes at 128 resolution is given below:\\n\\n| Model | Rank | Time/epoch(s) | Slowdown (%) |\\n|-------|------|---------------|--------------|\\n| Baseline | 1.0 | 34.96 | -- |\\n| GaLore | 0.20 | 34.47 | -1.40 |\\n| GaLore | 0.25 | 34.79 | -0.48 |\\n| GaLore | 0.50 | 36.27 | 3.75 |\\n| GaLore | 0.75 | 37.50 | 7.26 |\\n| Tensor-GaLore (40, 40, 40, 24) | 0.20 | 36.53 | 5.98 |\\n| Tensor-GaLore (48, 48, 48, 24) | 0.25 | 38.30 | 10.08 |\\n| Tensor-GaLore (56, 56, 56, 24) | 0.50 | 40.63 | 12.03 |\\n| Tensor-GaLore (64, 64, 56, 32) | 0.75 | 44.93 | 19.84 |\\n\\nKey observations from this data:\\n- Baseline execution time: ~35s per epoch\\n- GaLore shows minimal slowdown (and even a speedup at low ranks)\\n- Tensor-GaLore has moderate slowdown: 5-10% at low ranks (0.20-0.25) and 10-20% at higher ranks (0.50-0.75)\\n- The trade-off between compression (rank) and computational overhead is evident\\n- The overhead is reasonable given the substantial memory savings (up to 75%)\\n\\nHowever, we ran another ablation. The time cost for gradient projection remains constant regardless of input size, while the forward pass, backward pass, and gradient computation times scale linearly with input size. This leads to decreasing relative slowdown as problem size increases: 20% slowdown for 128 resolution, 10% for 256, and only 6-7% for 512. This can be formulated as slowdown = gradient_project/(Input * (forward + backward + gradient)), explaining why the overhead becomes increasingly negligible for larger problems - precisely where memory savings are most crucial.\\n\\nMoreover, we have incorporated techniques such as \\\"warm-restart\\\" initialization of the tensor decomposition to amortize the computational overhead across training iterations. This helps minimize the impact on the overall training efficiency. We have also explored opportunities to further optimize the tensor decomposition computations, which could potentially reduce the training time slowdown even further. We acknowledge that the computational overhead is an important consideration, and have provided a more thorough discussion of these trade-offs in the revised version.\\n\\n*Lastly, if you are happy with the revised version which includes more theory and answered all your questions, it would be great if you could increase the score :) We would be happy to answer any follow-up questions you have or weaknesses that concern you. Thank you once again for reviewing our paper.*\"}", "{\"comment\": \"We are writing to kindly remind you that we posted our response 2 weeks ago. 
If you have any additional feedback, concerns, or questions regarding our response, we would greatly appreciate hearing from you.\"}", "{\"title\": \"Response to Reviewer oxLj (Part 2)\", \"comment\": \"**The presented method shows good results only on the Darcy flow equation, in the other experiments the improvement is not strong**\\n\\n**Answer** Thank you for raising this point; however, we want to argue that even comparable performance to the baseline is good for us while reducing memory usage by a huge margin. While the improvements on Darcy flow are indeed substantial (48.8% gain), Tensor-GaLore shows consistent and significant improvements across multiple challenging PDE tasks. For electromagnetic wave propagation, we achieve an 11% improvement while reducing memory by 75%. On Burgers' equation, we maintain performance (+5% gain) despite significant memory reduction. Even for the highly complex Navier-Stokes equations, Tensor-GaLore achieves comparable performance (-5.4%) while drastically reducing memory usage, which is remarkable given that the matrix-based GaLore approach significantly degrades performance (-223%) on the same task. We also want to emphasize that the EM dataset involves \\\"complex-valued data inherently\\\", which adds an additional layer of complexity compared to the real-valued data in the other experiments. Modeling the propagation of optical pulses in a nonlinear waveguide with second-order nonlinearity is a highly complex physical phenomenon that requires careful handling of the complex-valued electric field envelope. 
Achieving an 11% improvement in test loss on this complex, real-world EM dataset is a significant accomplishment, as we point out that it is \\\"the first of its kind in the field.\\\" Previous neural operator approaches may not have been able to effectively capture the intricate, multidimensional relationships and complex-valued nature of this problem.\\n\\nIt's crucial to understand that the primary goal of Tensor-GaLore is not to necessarily outperform baseline models in terms of generalization error, but to enable training of larger, more complex models that would otherwise be impossible due to memory constraints. The fact that we can achieve comparable or better performance while reducing optimizer memory usage by up to 75% is a great achievement - it means we can scale to higher resolutions (like our 1024x1024 Navier-Stokes experiments) or more complex architectures that were previously intractable on smaller GPUs. Even in cases where we might need to train slightly longer or see a small trade-off in performance, the ability to fit these models in memory at all represents a crucial advancement for scientific machine learning. \\n\\nOur ablation studies (Table 6) on Navier-Stokes at 128 resolution provide even more compelling evidence of Tensor-GaLore's effectiveness. Across different rank configurations and matricization strategies, Tensor-GaLore consistently outperforms both baseline and GaLore variants. For instance, with rank ratio 0.25, Tensor-GaLore achieves a test L2 loss of 1.297 compared to GaLore's 2.341 and 9.019 for different matricization approaches. This demonstrates that preserving tensor structure is crucial for performance. Furthermore, our method scales effectively to the challenging 1024x1024 resolution case, where the memory savings become even more critical for practical deployment.\"}", "{\"summary\": \"The authors present Tensor-GaLore as a method for compressing the weights using tensor compression. 
They show the use of their approach on Fourier Neural Operators (FNOs) used for solving PDEs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The focus on efficient learning techniques for PDEs is very timely and requires constant improvement to outperform traditional methods in terms of accuracy and computing time.\", \"weaknesses\": [\"the authors discuss that TensorGalore is superior to Galore as it avoids SVDs. Nevertheless, the computation of tensor formats (Tucker, Tensor Train, etc.) relies on matricizations of the tensor and then typically singular value decomposition of those. So the SVD is still at the heart of TensorGalore.\", \"I think the idea of the manuscript is based on taking an FNO example and then using the tensor compression of the weights. I believe this to work well but it also seems a straightforward extension of previous work. There have been many applications of tensors for compression within neural networks.\"], \"questions\": [\"How do the authors implement their tensor format as they argue that the disadvantage of Galore is the need for the SVD, which typically used for efficient computations of popular tensor formats?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer oxLj (Part 3)\", \"comment\": \"**Since this paper describes improvements to the GaLora method (which has been published previously), it would be fair to compare against it rather than baseline. For example, in Table 2 for the Darcy equation the presented method is 48.8% better than baseline, while the regular GaLora is 19% better than baseline. Thus, the improvement presented in this paper is a 25% improvement.**\\n\\n**Answer: ** We appreciate this perspective about comparison metrics, but we believe comparing to baseline is appropriate for several reasons. 
First, our comprehensive theoretical analysis now proves why tensor-based optimization is fundamentally more suitable than matrix approaches for neural operators. We show that preserving tensor structure is crucial because gradients naturally develop mode-specific low-rank structures, with explicit bounds on stable rank evolution. This theory explains why matrix-based GaLore, which collapses multi-dimensional relationships, can actually harm performance on complex PDE tasks.\\n\\nThis theoretical insight is strongly validated by our empirical results. On the challenging Navier-Stokes equations, matrix-based GaLore significantly degrades performance (-223% compared to baseline), making it an inappropriate comparison point. Our ablation studies (Table 6) further demonstrate this - across different matricization strategies, GaLore consistently performs poorly (test L2 losses of 2.341-9.019) compared to both baseline and Tensor-GaLore (1.297) at rank 0.25. This performance gap becomes even more pronounced as problem complexity increases. We are currently conducting additional experiments on higher-resolution Navier-Stokes problems (beyond 1024x1024) and early results suggest that preserving tensor structure becomes increasingly critical as the physics becomes more complex. This aligns with our theory - higher-order relationships in the parameter space are fundamental to capturing complex physical phenomena, and these relationships are lost when forcing tensor parameters into matrix form. 
We look forward to sharing these additional results, which we believe will further demonstrate why baseline, rather than matrix GaLore, is the appropriate comparison point for evaluating tensor-structured optimization approaches.\\n\\n**Can one pick the efficient rank ratio in advance?**\\n\\n**Answer:** We acknowledge that this is an important consideration, as the choice of rank ratio can significantly impact the performance and memory savings of our method.\\n\\nAs we've mentioned in the paper, we recognize that adaptively selecting the optimal rank ratio is a challenging problem, as it likely depends on the specific characteristics of the problem domain and the complexity of the underlying PDE or physical system being modeled. However, based on our experimental results, we have observed that a rank ratio around 25-50% of the original weight tensor size tends to work well across a variety of PDE tasks. This suggests that starting with a rank ratio in this range can be a reasonable initial choice.\\n\\nWe agree with the reviewer that the specific rank ratio choice may also depend on the domain expert's knowledge about the problem at hand. For example, if the dataset is known to represent a highly complex, turbulent physical system, a higher rank ratio may be warranted to capture the necessary level of detail and multiscale interactions. Conversely, for simpler PDE problems, a lower rank ratio may be sufficient to achieve good performance with significant memory savings.\\n\\nWe also want to highlight that the Tensor-GaLore approach allows for flexibility in the rank selection, as we can apply different ranks to different dimensions in the tensor. Additionally, we acknowledge that developing more advanced rank selection mechanisms, such as adaptive or automated techniques, is an area for future research and was not the primary focus of the current work. 
We are actively exploring these directions to enhance the capabilities of Tensor-GaLore further.\\n\\n**How is it that for Burgers Equation in Table 2 test loss is much (by an order of magnitude) less than train loss?**\\n\\n**Answer**: We apologize, you're absolutely right. Thank you for catching it. Upon further inspection, it appears the train loss values reported for the Burgers' Equation in Table 2 were incorrect. We have updated the results in the table, whose values are as follows:\\n\\n| Model | Rank | Memory | Train | Test H1 | Test L2 | Gain (%) |\\n|-------|------|---------|-------|----------|----------|-----------|\\n| Baseline | 1.0 | 3.94 | 0.0064 | 0.0050 | 0.0026 | / |\\n| GaLore (d=2) | 0.5 | 3.88 | 0.0052 | 0.0100 | 0.0062 | -250 |\\n| Tensor-GaLore | 0.5 | 3.87 | 0.0026 | 0.0041 | 0.0025 | +5 |\\n\\nAs you rightly pointed out, the test loss values are no longer orders of magnitude lower than the training loss. This was an error on our part in the initial reporting of the results. Table 4 has been updated similarly in the paper, along with all the other minor mistakes you pointed out.\"}", "{\"title\": \"Response to Reviewer oxLj\", \"comment\": \"First, we want to thank the reviewer for their careful consideration of our work. While we appreciate their perspective, we respectfully disagree with the assessment that these contributions are incremental or primarily engineering-focused. Let us address the key points:\\n1. Novel Tensor Theory: Our work introduces fundamental theoretical results about tensor gradient structure that go beyond simple extensions of matrix theory. Specifically, we prove that neural operator gradients naturally develop low-rank structure in each tensor mode during training (Lemma 10), which is a non-trivial extension of matrix gradient analysis. Moreover, in the updated revision, Section 3.2 presents the main results, while the Appendix contains all the general proofs and background material.\\n\\n2. 
Explicit Comparison with GaLore: Thank you for this point, although we cannot update the paper, we provide a rigorous theoretical analysis showing why Tensor-GaLore is fundamentally superior to GaLore for tensor-structured models. Our key lemma proves that GaLore cannot achieve simultaneous low-rank structure across all modes due to matricization, while Tensor-GaLore can. This is not just an engineering improvement but a fundamental mathematical insight about tensor optimization. We showcase the proof below.\\n\\n**[Lemma] Tensor-GaLore vs GaLore Rank Structure:** Consider a gradient tensor $\\\\mathcal{G}_t \\\\in \\\\mathbb{R}^{N_1 \\\\times N_2 \\\\times N_3 \\\\times N_4}$ following the parametric form:\\n$\\n\\\\mathcal{G}_t = \\\\frac{1}{N}\\\\sum_{i=1}^N (\\\\mathcal{A}_i - \\\\mathcal{B}_i \\\\times_1 \\\\mathcal{W}_t \\\\times_2 \\\\mathcal{C}_i)\\n$\\nwhere $\\\\mathcal{B}_i$ and $\\\\mathcal{C}_i$ are mode-k PSD for all modes k. Let:\\n\\n(a) GaLore with matricization along dimension d unfold $\\\\mathcal{G}_t$ to $G_t^{(d)} \\\\in \\\\mathbb{R}^{N_d \\\\times (N_1N_2N_3N_4/N_d)}$\\n\\n(b) Tensor-GaLore preserve the tensor structure and apply mode-wise projections\\n\\nThen:\\n\\n1. Under GaLore with any dimension d:\\n $\\n \\\\exists k \\\\neq d: \\\\lim_{t \\\\to \\\\infty} sr_k(\\\\mathcal{G}_t) \\\\geq \\\\min(N_k/2, N')\\n $\\n where $N'$ is the rank of the training data.\\n\\n2. 
Under Tensor-GaLore:\\n $\\n \\\\forall k: \\\\lim_{t \\\\to \\\\infty} sr_k(\\\\mathcal{G}_t) \\\\leq N_k/2\\n $\\n\\nThat is, GaLore cannot achieve low rank in all modes simultaneously, while Tensor-GaLore achieves low rank across all modes.\\n\\n**Summarized Proof**:\\n1) First, let's analyze GaLore's behavior:\\n\\n a) When GaLore matricizes along dimension d, it reshapes $\\\\mathcal{G}_t$ into matrix $G_t^{(d)}$\\n\\n b) From GaLore paper Lemma B.3, under SGD updates:\\n $\\n sr(G_t^{(d)}) \\\\leq sr(G_{t_0}^{\\\\parallel}) + \\\\left(\\\\frac{1-\\\\eta\\\\lambda_2}{1-\\\\eta\\\\lambda_1}\\\\right)^{2(t-t_0)} \\\\frac{\\\\|G_0-G_{t_0}^{\\\\parallel}\\\\|_F^2}{\\\\|G_{t_0}^{\\\\parallel}\\\\|_2^2}\\n $\\n\\n c) This rank reduction only applies to the matricized dimension d\\n\\n d) For any other mode $k \\\\neq d$, consider the mode-k unfolding $(\\\\mathcal{G}_t)_{(k)}$\\n\\n e) Due to the parametric form:\\n $\\n (\\\\mathcal{G}_t)_{(k)} = \\\\frac{1}{N}\\\\sum_{i=1}^N ((\\\\mathcal{A}_i)_{(k)} - (\\\\mathcal{B}_i)_{(k)}\\\\mathcal{W}_t^{(k)}(\\\\mathcal{C}_i)_{(k)}^T)\\n $\\n\\n f) The mode-k operator $\\\\mathcal{S}_k$ remains high rank because matricization along d scrambles mode-k structure\\n\\n g) Specifically, if $rank(\\\\{\\\\mathcal{F}_i\\\\}) = N'$:\\n $\\n sr_k(\\\\mathcal{G}_t) \\\\geq \\\\min(N_k/2, N')\\n $\\n\\n2) Now for Tensor-GaLore:\\n\\n a) Each mode k is handled independently with its own projection:\\n $\\n \\\\mathcal{R}_t = \\\\mathcal{G}_t \\\\times_1 P_1^T \\\\times_2 P_2^T \\\\times_3 \\\\cdots \\\\times_d P_d^T\\n $\\n\\n b) From Theorem 2 (proven earlier), under SGD:\\n $\\n \\\\|(\\\\mathcal{R}_t)_{(k)}\\\\|_F \\\\leq \\\\left[1-\\\\eta(\\\\kappa_{t-1}^{(k)}-L_A^{(k)}-L_B^{(k)}L_C^{(k)}D_k^2)\\\\right] \\\\|(\\\\mathcal{R}_{t-1})_{(k)}\\\\|_F\\n $\\n\\n c) From Corollary 2, for each mode k:\\n $\\n sr_k(\\\\mathcal{G}_t) \\\\leq sr_k(\\\\mathcal{G}_{t_0}^{\\\\parallel}) + 
\\\\left(\\\\frac{1-\\\\eta\\\\lambda_2^{(k)}}{1-\\\\eta\\\\lambda_1^{(k)}}\\\\right)^{2(t-t_0)} \\\\frac{\\\\|\\\\mathcal{G}_0-\\\\mathcal{G}_{t_0}^{\\\\parallel}\\\\|_F^2}{\\\\|\\\\mathcal{G}_{t_0}^{\\\\parallel}\\\\|_2^2}\\n $\\n\\n d) Therefore $sr_k(\\\\mathcal{G}_t) \\\\leq N_k/2$ for large t, for all modes k simultaneously\\n\\nThe key insight is that matricization in GaLore fundamentally cannot preserve low-rank structure in all modes simultaneously, while the tensor approach of Tensor-GaLore naturally handles each mode's rank structure independently and optimally. We do not need any additional conditions because we exploit the tensor structure that arises naturally in FNOs, in contrast to GaLore.\"}", "{\"title\": \"Response to Reviewer mEXm (Part 3)\", \"comment\": \"**How do the authors implement their tensor format as they argue that the disadvantage of Galore is the need for the SVD, which typically used for efficient computations of popular tensor formats?**\\n\\n**Answer:** We thank the reviewer for their question! Let us first explain the FNO architecture before answering your question. The Fourier Neural Operator (FNO) architecture, which serves as the backbone for the Tensor-GaLore approach, inherently involves tensor-structured weights. As described in the paper:\\n\\n>\\\"In an FNO, the spectral convolution layer contracts a weight tensor R \\u2208 C^(N1 x N2 x N3 x N4) with functions in the Fourier domain: (Kv^l)(x) = F^-1(R \\u00b7 TK Fv^l)(x), where F and F^-1 are the Fourier transform and its inverse, R is a learnable transformation parameterized by the weight tensor introduced above, and TK truncates to the lowest K Fourier modes.\\\"\\n\\nSo the core of the FNO architecture involves a 4th-order tensor R that represents the learnable transformation in the Fourier domain. This tensor structure is crucial for capturing the complex, multidimensional relationships in scientific computing applications like PDEs. 
In the Tensor-GaLore implementation, we use the popular TensorLy library, which provides a well-tested and efficient implementation of the Tucker decomposition. Specifically, we use the tucker function from the tensorly.decomposition module to compute the Tucker factors:\\n\\n```python\\nimport torch\\nfrom tensorly import tenalg\\nfrom tensorly.decomposition import tucker\\n\\nclass TensorGaLore:\\n    def __init__(self, rank):\\n        self.rank = rank\\n        self.factors = None\\n\\n    def project_gradient(self, full_rank_gradient):\\n        # Compute the Tucker decomposition of the full-rank gradient\\n        # (the factors are stored so project_back can reuse them)\\n        core, self.factors = tucker(full_rank_gradient, rank=self.rank)\\n        # Project the gradient onto the low-rank subspace\\n        low_rank_gradient = tenalg.multi_mode_dot(full_rank_gradient, self.factors, transpose=True)\\n        return low_rank_gradient\\n\\n    def project_back(self, low_rank_gradient):\\n        # Compute the inverse projection from the low-rank subspace\\n        full_rank_gradient = tenalg.multi_mode_dot(low_rank_gradient, self.factors)\\n        return full_rank_gradient\\n```\\n\\nThe key steps are:\\n\\n1. Compute the Tucker decomposition of the full-rank gradient tensor using the tucker function from TensorLy.\\n\\n2. Project the full-rank gradient onto the low-rank subspace by performing the multi_mode_dot operation with the Tucker factor matrices.\\n\\n3. To update the model parameters, project the low-rank gradient back to the full-rank space using the inverse multi_mode_dot operation.\\n\\nBy using the Tucker decomposition, Tensor-GaLore avoids the need for SVD computations, as required in the original matrix-based GaLore approach. We also present the codebase here (as we had linked in the original paper as well in the footnote): https://anonymous.4open.science/r/tensorgalore/tensor_galore/tensor_galore_projector.py . We also mention that we use the FNO models and implementations from the neuraloperator/neuraloperator library, which provides a unified codebase for working with neural operators, including the ability to represent the weight tensors in a tensorized form. 
Please let us know if you have any follow up questions regarding this answer.\\n\\n\\n*Lastly, if you are happy with the revised version which includes a detailed theory of our approach as well as we have answered all of your questions in detail, it would be great if you could increase the score: ) We would be happy to answer any follow-up questions you have or weakness that concern you. Thank you once again for reviewing our paper.*\"}", "{\"summary\": \"This work presents Tensor-GaLore, an algorithm that leverages low-rank tensor decomposition on the gradients of tensorized weights. This work is built on top of the previous work (GaLore), which applies low-rank factorization (SVD) on the gradients. Experimental results show that applying it Fourier Neural Operators yield better memory usage and accuracy for numerical PDE problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The quality of the presentation is good. The work is clear and easy to follow.\\n2. The idea of using Tucker decomposition to perform low-rank approximation makes sense for numerical PDE problems, and experimental results verify that.\", \"weaknesses\": \"Despite being clear and effective, I believe the work has limited novelty. The tensor-GaLore approach has limited difference compared to GaLore. In addition, only empirical rather than theoretical results is provided to show the efficacy of the algorithm.\", \"questions\": \"please see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer mEXm\", \"comment\": \"We are writing to kindly remind you that we posted our response 6 days ago. 
If you have any additional feedback, concerns, or questions regarding our response, we would greatly appreciate hearing from you.\"}", "{\"metareview\": \"The paper proposes to use tensor factorization for gradients (instead of matrix factorization in the original GaLoRe paper) for applications for Fourier Neural Operators, showing memory savings.\\nThe savings for the memory are not dramatic (i.e. 68 gigabytes -> 55 gigabytes) and for some cases there is a drop in the accuracy. \\nMoreover, the usage of the memory can be reduced by using other techniques, such as checkpointing and quantization, and there is no strict need to use more complicated approaches for this particular task. There is also a slowdown effect for some of the parameters (again, should be compared to other memory footprint reduction techniques). \\nThe only modification is the generalization to the Tucker decomposition case (which is rather straightforward) and \\na specific application to FNO. The authors addressed some of the concerns in the rebuttal by adding new material and theoretical experiments, but I think it is not enough.\", \"additional_comments_on_reviewer_discussion\": \"There was a discussion between the authors and reviewer oxLj, who stated reasonable concerns. Some of them were answered, but the questions regarding the theoretical part remained opened, especially in the comparison between GaLoRe and its tensor version.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We are writing to kindly remind you that we posted our response 2 weeks ago. If you have any additional feedback, concerns, or questions regarding our response, we would greatly appreciate hearing from you.\"}", "{\"summary\": \"This paper presents a modification of the GaLora method that allows to update the weights not directly, but in a low-parameter space. 
The authors present a modification of this method that uses a low-rank tensor decomposition, namely the Tucker decomposition, instead of a low-rank matrix decomposition. This approach is applied to neural operators for solving PDEs, where 4-way tensors arise naturally.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper uses a low-rank tensor decomposition, which preserves the original multidimensional structure\", \"Some experiments show the effectiveness of this technique\", \"Clearly presented paper\"], \"weaknesses\": [\"No code provided to reproduce the results\", \"No theoretical analysis (in the original GaLore paper there is theoretical justification of the low-rank structure, convergence, etc.)\", \"The only significant change (other than technical changes) from the original GaLore method is the use of Tucker decomposition for 4-way tensors instead of low-rank matrix decomposition\", \"The presented method shows good results only on the Darcy flow equation; in the other experiments the improvement is not strong\", \"Since this paper describes improvements to the GaLore method (which has been published previously), it would be fair to compare against it rather than the baseline. For example, in Table 2 for the Darcy equation the presented method is 48.8% better than baseline, while the regular GaLore is 19% better than baseline. 
Thus, the improvement attributable to this paper is about 25%.\", \"Overall, this paper is incremental to the original GaLore paper, without theoretical evaluations (which were in the original paper) and with inconclusive numerical results.\", \"Minor\", \"L458-459 \\\"On Darcy flow (as shown in Table 6)\\\" should be \\\"Table 2\\\"\", \"L397 word \\\"Table\\\" is missing\"], \"questions\": [\"Can one pick the efficient rank ratio in advance?\", \"How is it that for the Burgers equation in Table 2 the test loss is much lower (by an order of magnitude) than the train loss?\", \"Have you tried using other low-rank tensor decompositions (CANDECOMP/PARAFAC, Tensor-train, etc.)?\", \"In Algorithm 1\", \"is $r$ the rank or the rank ratio?\", \"tensor $\\\\mathcal{M}_0$ (with $\\\\mathcal V_0$) has the same shape as $\\\\mathcal W\\\\in\\\\mathbb{C}^{N_1\\\\times N_2\\\\times N_3\\\\times N_4}$. Should it be $\\\\mathcal M_0\\\\in\\\\mathbb{C}^{R_1\\\\times R_2\\\\times R_3\\\\times R_4}$?\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer oxLj (Part 4)\", \"comment\": \"**Have you tried using other low-rank tensor decompositions (CANDECOMP/PARAFAC, Tensor-train, etc.)?**\\n\\n**Answer**: Thank you for this important question about alternative tensor decomposition approaches. While techniques like CANDECOMP/PARAFAC (CP) and Tensor-Train (TT) decompositions are powerful for parameter compression, they are fundamentally less suitable for our gradient optimization objective. The key insight is that Tensor-GaLore is not trying to learn a compressed representation of the parameters themselves, but rather to efficiently project gradients into and out of a learned latent space during optimization. Tucker decomposition is uniquely suited for this purpose because of two critical properties:\\n\\n1. 
The orthogonality of its factor matrices enables simple and numerically stable projection operations - we can project into the low-rank space using U^T and back using U without computing complex inverses\\n\\n2. The factor matrices provide natural bases for the subspace of each mode independently, allowing us to preserve the multi-dimensional structure that we prove is crucial for neural operator training and we prove the theory behind it.\\n\\nIn contrast, while CP and TT decompositions excel at parameter compression, they don't provide a natural way to project tensors to and from their latent spaces. Their factors are not orthogonal and the reconstructions involve more complex operations that could introduce numerical instabilities in the optimization process. Our theoretical analysis (Section 3.2) shows why preserving mode-wise structure through orthogonal projections is fundamental to the convergence guarantees of Tensor-GaLore. Thus, Tucker decomposition's mathematical properties align perfectly with our gradient optimization objectives in a way that other tensor factorizations don't.\\n\\n**In Algorithm 1is $r$ is rank of rank ratio?**\\n\\n**Answer:** Yes, that is correct! You can either pass the rank ratio or the rank as an integer or a list of integers (each one corresponding to a dimension).\\n\\n**Tensor $\\\\mathcal{M}_0$ (with $\\\\mathcal V_0$) has the same shape as $\\\\mathcal W\\\\in\\\\mathbb{C}^{N_1\\\\times N_2\\\\times N_3\\\\times N_4}$. Should it be $\\\\mathcal M_0\\\\in\\\\mathbb{C}^{R_1\\\\times R_2\\\\times R_3\\\\times R_4}$?**\\n\\n**Answer:** Thank you for the question! It shouldn't be that but rather as it is. This is because if the original Weights are complex then we would have complex associated gradients and moments. Since we are considering complex-valued weights $\\\\mathcal W$ in the Fourier Neural Operator setting, the gradients and optimizer moment tensors should also be complex-valued. 
This is why we updated the Adam optimizer to use the complex conjugate update $R_t \\bar R_t = |R_t|^2$ for the second moment instead of just the real square $R_t^2$. \\n\\nThere is also a lengthy discussion of this issue in \\\"Adam Optimizer Implemented Incorrectly for Complex Tensors\\\" (https://github.com/pytorch/pytorch/issues/59998). For the EM dataset and other complex datasets, we found it necessary to use the correct complex update for Adam, rather than the default PyTorch implementation, in order to converge. If the weights $\\mathcal W$ happen to be real-valued, then the moment tensors can also be real-valued, and the expressions can be simplified accordingly. But the general case should account for the complex-valued nature of the tensors involved.\\n\\n**Overall, this paper is incremental to the original GaLore paper, without theoretical evaluations (which were in the original paper) and with inconclusive numerical results.**\\n\\n**Answer**: Now that we have updated the paper with a detailed theory and ablation studies, we want to argue that our work is not incremental. Tensor-GaLore represents a fundamental advancement over GaLore, supported by both comprehensive theoretical analysis and strong empirical results. Our paper develops a complete theoretical framework from first principles - establishing tensor operations, proving FNO reversibility, and providing explicit convergence guarantees for tensor gradient optimization. We prove that gradients naturally develop mode-specific low-rank structures with bounded stable rank evolution, explaining why preserving tensor structure through Tucker decomposition is fundamentally more suitable than matrix approaches. This theory is validated by our empirical results - while matrix-based GaLore significantly degrades performance on complex tasks like Navier-Stokes (-223%) due to collapsing multi-dimensional relationships, Tensor-GaLore maintains or improves performance while reducing memory by up to 75%. 
Also please check the general response for more details.\\n\\n*Lastly, if you are happy with the revised version which includes the theory that you requested as well as answered all your questions in detail, it would be great if you could increase the score: ) We would be happy to answer any follow-up questions you have or weakness that concern you. Thank you once again for reviewing our paper.*\"}", "{\"title\": \"Response to Reviewer RFRZ\", \"comment\": \"**Despite being clear and effective, I believe the work has limited novelty. The tensor-GaLore approach has limited differences compared to GaLore. In addition, only empirical rather than theoretical results is provided to show the efficacy of the algorithm.**\\n\\n**Answer:** Thank you for the feedback on the novelty and scope of our work. You raise a fair point that the Tensor-GaLore approach may have limited novelty compared to the original GaLore method, as it primarily extends the core idea to handle tensor-structured weights. We acknowledge that the fundamental principle of projecting gradients onto low-rank subspaces is shared between the two methods. However, the technical challenges and innovations required to apply this concept to tensor-weight models effectively are non-trivial, as we discussed in the previous response also please look at our response to Reviewer po1k and the general response. We also mention why this is not trivial in the introduction in detail. \\n\\nNow, to answer your next question, we have added extensive theory to the revised version. Please take a look at it. Our paper now develops a complete theoretical framework starting from first principles. We begin by establishing fundamental tensor operations and notation (Section H), including rigorous definitions of tensor products, traces, norms, and inner products. This mathematical foundation is crucial for analyzing tensor-structured neural networks. 
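To make the mode-wise rank notion concrete, here is a small NumPy sketch (our own illustration with hypothetical shapes, not from the paper): a tensor built with Tucker ranks (2, 2, 2, 2) has rank 2 in every mode-k unfolding, while a GaLore-style matricization that groups modes together can have rank up to the product of the grouped mode ranks.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tensor with Tucker ranks (2, 2, 2, 2): low-rank in all modes simultaneously.
core = rng.standard_normal((2, 2, 2, 2))
U = [np.linalg.qr(rng.standard_normal((8, 2)))[0] for _ in range(4)]
g = np.einsum('abcd,ia,jb,kc,ld->ijkl', core, *U)

# Every mode-k unfolding has rank 2.
mode_ranks = [np.linalg.matrix_rank(np.moveaxis(g, k, 0).reshape(8, -1))
              for k in range(4)]

# A matricization grouping modes (1,2) x (3,4) mixes modes: its rank can be
# as large as 2 * 2 = 4 for a generic core.
flat_rank = np.linalg.matrix_rank(g.reshape(64, 64))
```

Truncating the flattened matrix to rank 2 would therefore discard structure that the per-mode truncation keeps for free.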
Building on this, we prove the reversibility of Fourier Neural Operators (Section I) by systematically analyzing each component - spectral layers, MLP layers, and activation functions - and showing how their compositions maintain reversibility properties. This reversibility analysis provides crucial insights into the gradient structure that enables our tensor-based optimization approach.\\n\\nWith these foundations established, we then prove our main theoretical results for Tensor-GaLore, showing both convergence guarantees and the natural emergence of low-rank structure during training. We prove that gradient tensors develop mode-wise low-rank structure under mild conditions and establish explicit bounds on the stable rank evolution. The theoretical framework explains why working directly with tensors through Tucker decomposition is fundamentally more suitable than matrix-based approaches like GaLore, which force tensor parameters into matrix form. Our empirical results strongly validate these theoretical insights - Tensor-GaLore maintains or improves performance while achieving significant memory savings, whereas matrix-based GaLore can actually hurt performance on challenging PDE tasks. This combination of rigorous theory from first principles and strong empirical validation demonstrates that Tensor-GaLore represents a significant theoretical and practical advancement in efficiently training neural operators.\\n\\n*Lastly, if you are happy with the revised version which includes the theory that you requested as well as answered your question, it would be great if you could increase the score: ) We would be happy to answer any follow-up questions you have or weakness that concern you. 
Thank you once again for reviewing our paper.*\"}", "{\"title\": \"Response to Reviewer oxLj\", \"comment\": \"**No code provided to reproduce the results**\\n\\n**Answer:** We want to point out that we indeed have provided the code to reproduce the results present in the paper here (we also had this in the original version before the revision as a footnote, but will mention it again for clarity): https://anonymous.4open.science/r/tensorgalore/tensor_galore/tensor_galore_projector.py.\\n\\n**No theoretical analysis (in the original GaLore paper there is theoretical justification of the low-rank structure, convergence, etc.)**\\n\\n**Answer:** We thank the reviewer for their concern about the lack of theoretical analysis. We have uploaded a revised version with detailed theoretical analysis that goes significantly beyond the original GaLore paper. Our theoretical framework starts from first principles, introducing rigorous tensor operations and notation (Section H), then proves the reversibility of FNO components (Section I), before establishing our main theoretical results (Section J). Specifically, we prove both convergence guarantees for Tensor-GaLore and characterize how gradients naturally develop low-rank structure in tensor space for FNOs. We show that, under mild mode-k continuity conditions, Tensor-GaLore converges. Additionally, we establish a special structure for tensor gradients. We hope that this comprehensive theoretical analysis explains why tensor-based optimization is fundamentally more suitable than matrix approaches for neural operators and provides rigorous justification for our empirical observations. \\n\\n**The only significant change (other than technical changes) from the original GaLore method is the use of Tucker decomposition for 4-way tensors instead of low-rank matrix decomposition**\\n\\n**Answer:** Again we thank the reviewer for bringing up this point. 
First, it is important to emphasize the critical role of tensor-structured weights in FNOs, the key focus of our work. FNOs are a class of neural network architectures designed to learn complex, multidimensional mappings between function spaces, which are crucial for solving parametric partial differential equations (PDEs). The weight tensors in FNOs can have orders as high as 4 or 5, capturing intricate relationships between spatial, temporal, and channel dimensions. Also, this method is not only suited to FNOs but to any tensor-based model; this is the scope of future work, where we plan to try it on tensorized LLMs, quantum networks with tensor weights, etc.\\n\\nEffectively handling these high-order tensor weights is essential for the success of FNOs in scientific computing applications. Flattening these tensors into matrices, as done in the matrix-based GaLore approach, can lead to a significant loss of important dimension-specific information, compromising the model's ability to capture the underlying physical phenomena. This is a key limitation that Tensor-GaLore aims to address, and now, as requested, we have a detailed theory for it in the new revised version. Furthermore, please look at the general response to the question as well.\"}", "{\"comment\": \"We are writing to kindly remind you that we posted our response 2 weeks ago. If you have any additional feedback, concerns, or questions regarding our response, we would greatly appreciate hearing from you.\"}", "{\"title\": \"Response to Reviewer mEXm\", \"comment\": \"**The authors discuss that TensorGalore is superior to Galore as it avoids SVDs. Nevertheless, the computation of tensor formats (Tucker, Tensor Train, etc.) relies on matricizations of the tensor and then typically singular value decomposition of those. So the SVD is still at the heart of TensorGalore.**\\n\\n**Answer:** We thank the reviewer for their point. 
While Tensor-GaLore aims to avoid the limitations of the matrix-based approach in GaLore, the tensor decomposition techniques it relies on, such as Tucker decomposition, still fundamentally involve Singular Value Decomposition (SVD) computations in the background.\\n\\nThe key difference is that Tensor-GaLore leverages the Tucker decomposition, which preserves the multidimensional structure of the tensors, unlike the flattening approach used in GaLore. This is a crucial advantage, as it allows Tensor-GaLore to better capture the complex, high-dimensional relationships present in tensor-based models like Fourier Neural Operators.\\n\\nThe reason Tensor-GaLore specifically chooses the Tucker decomposition is that it offers several important properties that make it well-suited for the task:\\n\\n1. Equivalence to SVD in the matrix case: As mentioned in the paper, the Tucker decomposition reduces to the familiar SVD when applied to matrices (2D tensors). This ensures a seamless extension of the GaLore principles to higher-order tensors.\\n\\n2. Orthogonality of factor matrices: The factor matrices in the Tucker decomposition are orthogonal, which enables efficient and numerically stable projection and reconstruction operations. This is crucial for the gradient projection and update steps in Tensor-GaLore.\\n\\n3. Preserving multidimensional structure: Unlike the flattening approach used in GaLore, the Tucker decomposition operates directly on the higher-order tensor, preserving the distinct relationships along each tensor dimension. This aligns well with the multidimensional nature of tensor-based models like Fourier Neural Operators.\\n\\nYou are correct that the Tucker decomposition still relies on SVD computations in the background, as it involves approximating the SVD of the tensor unfoldings along each mode. However, the key advantage of Tensor-GaLore is that it operates on the tensor directly, rather than flattening it into a matrix. 
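Point 1 above (Tucker reduces to SVD for matrices) can be checked numerically with a small NumPy sketch (our own illustration): for a 2-way tensor the two mode unfoldings are A and A^T, so the truncated-HOSVD/Tucker factors are the leading left and right singular vectors, and the Tucker approximation coincides with the truncated SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 3
core = U[:, :r].T @ A @ Vt[:r].T           # Tucker core for ranks (r, r)
A_tucker = U[:, :r] @ core @ Vt[:r]        # Tucker reconstruction
A_svd = (U[:, :r] * s[:r]) @ Vt[:r]        # rank-r truncated SVD
```

Here the Tucker core comes out as diag(s[:r]), so the two rank-r approximations are identical, matching the claimed equivalence in the 2D case.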
This is an important distinction because, as mentioned in the introduction, SVD-based approaches like GaLore tend to discard crucial dimension-specific information when applied to tensors. The SVD computation in the Tucker decomposition is performed in a way that preserves the multidimensional relationships, which is essential for capturing the complex physical phenomena modeled by tensor-based architectures.\\n\\nFurthermore, the tensor decomposition techniques, including Tucker, do not have the same inherent limitations as the matrix SVD when it comes to handling the dimensions of the input data. The SVD in GaLore operates on a flattened matrix, and the rank selection affects all dimensions equally, which can be suboptimal. In contrast, the Tucker decomposition allows for a separate rank parameter along each tensor mode, providing more flexibility in preserving the important information in each dimension.\"}", "{\"comment\": \"I thank the authors for their detailed response. I have carefully read the responses to the other reviewers as well as the updated version of the paper.\\n\\nAt this point, I would like to note that this paper is a rather serious step in the development of tensor-based models in machine learning and deserves attention. Besides, the authors have posted code that allows verifying the results. However, my main concerns were not fully addressed. In particular, the authors mention memory reduction on different equations, but this improvement depends too much on the type of equation and has more of an engineering, technical meaning than a scientific one. I think other various modifications of the original idea of the GaLore algorithm, which is that the updates occur in some low-rank space other than the parameters themselves, are possible, and they would also lead to more efficient operation of the algorithm on selected (not all!) PDEs. But such refinements are rather incremental and are not A* conference level publications. 
In the updated version of the paper, an appendix has been added to introduce new concepts from tensor algebras and some similarity theorems. However, many of these results are rather trivial and do not prove the efficiency of the particular method presented. I have not found any proof that Tensor-GaLore will be better than GaLore (perhaps, under some given conditions). I recommend to transfer the basic, new theoretical results to the main text of the paper.\\n\\n\\nAmong minor remarks, I never understood the authors' comment about the dimensions of $\\\\mathcal{M}_t$ and $\\\\mathcal V_t$. My comment was related to the fact that the original GaLore algorithm implies updating gradients in a low-parametric space than the original one. In this case, the dimension of the variables to be updated, i.e., $\\\\mathcal{M}_t$ and $\\\\mathcal V_t$, would have to be (much) smaller than the dimension of the original weight space $\\\\mathcal W$. In the Alg. 1, L240-244 they are the same. If the authors' answer is correct, then I don't understand at all what is the dimensionality of that low-parametric space where the updates occur, what variables in the Algorithm correspond to it. However, the authors ignored this point, answering something about technical difficulties with complex numbers, which are in torch. This convinced me even more that the paper is largely technical rather than containing breakthrough fundamental ideas.\\n\\nOverall, i keep my score.\"}", "{\"title\": \"Response to Reviewer oxLj\", \"comment\": \"We are writing to kindly remind you that we posted our response 6 days ago. If you have any additional feedback, concerns, or questions regarding our response, we would greatly appreciate hearing from you.\"}", "{\"summary\": \"This paper extends GaLore [Zhao 2024] to neural networks with tensor weights by adding Tucker Decomposition and performing low-rank projection directly on tensor gradients. 
The experiment compares the proposed method to vanilla GaLore (with reshaping) on Fourier Neural Operators (FNOs), a class of tensor-weight models for solving partial differential equations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper appears to tackle a gap that has received little attention in the literature, the efficient training of tensor-weight models, with the only prior works being [Kossaifi 2024] and [George 2024].\\n\\nThe paper is generally well-written and easy to follow, with a clear story.\", \"weaknesses\": \"Despite the novel application, the approach is a somewhat straightforward extension of GaLore to tensor-weight models, replacing the SVD with a Tucker decomposition.\\n\\nThere is a lack of discussion of the slowdown in training given the overhead.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer RFRZ\", \"comment\": \"We are writing to kindly remind you that we posted our response 6 days ago. If you have any additional feedback, concerns, or questions regarding our response, we would greatly appreciate hearing from you.\"}", "{\"comment\": \"We are writing to kindly remind you that we posted our response 2 weeks ago. If you have any additional feedback, concerns, or questions regarding our response, we would greatly appreciate hearing from you.\"}", "{\"title\": \"Response to Reviewer oxLj (Part 2)\", \"comment\": \"1. **but this improvement depends too much on the type of equation and has more of an engineering, technical meaning than a scientific one. I think other various modifications of the original idea of the GaLore algorithm, which is that the updates occur in some low-rank space other than the parameters themselves, are possible, and they would also lead to more efficient operation of the algorithm on selected (not all!) 
PDEs.**\\n\\n**Answer**: While we agree to a certain extent with the reviewer, we want to again clarify some main points. It's well-established in PDE theory that different equation classes have inherently different complexities and solution structures. Some PDEs are elliptic, others parabolic or hyperbolic, each with distinct mathematical properties. These differences aren't just \\\"engineering\\\" variations - they reflect fundamental mathematical distinctions. Hence, when developing neural operator methods for PDEs, the architecture must respect the underlying mathematical structure of the problem. For example, FNOs naturally handle spectral properties, while Graph Neural Operators better suit mesh-based problems or non-uniform meshes.\\n\\nSecondly, while performance varies across PDEs, our theoretical contributions about tensor gradient structure are universal for any tensor network with tensor weights. Reversibility of those models depends on the structure, but our proofs can be adapted to a wide variety of tensor networks.\\n\\n2. **Among minor remarks, I never understood the authors' comment about the dimensions of $\\mathcal{M}_t$ and $\\mathcal V_t$. My comment was related to the fact that the original GaLore algorithm implies updating gradients in a lower-parametric space than the original one. In this case, the dimension of the variables to be updated, i.e., $\\mathcal{M}_t$ and $\\mathcal V_t$, would have to be (much) smaller than the dimension of the original weight space $\\mathcal W$. In the Alg. 1, L240-244 they are the same. If the authors' answer is correct, then I don't understand at all what is the dimensionality of that low-parametric space where the updates occur, what variables in the Algorithm correspond to it.**\\n\\n**Answer**: We apologise for the confusion and misunderstanding. 
You are correct; it is indeed what you had suggested, i.e., $\\mathcal{M}_0 \\in \\mathbb{C}^{r \\times r \\times r \\times r}$ if using the same rank $r$ for all dimensions, or $\\mathbb{C}^{R_1 \\times R_2 \\times R_3 \\times R_4}$ if choosing different ranks for each dimension. We talked about the complex case because we had misread the superscript, thinking those $R$'s denoted the reals, which is why we switched to talking about the complex case. We thank the reviewer for again bringing this up; we have updated the pseudocode for both GaLore and Tensor-GaLore and will upload it once revisions are allowed.\\n\\n$\\\\State \\\\text{Initialize first-order moment} \\\\mathcal{M}_0 \\\\in \\\\mathbb{C}^{r \\\\times r \\\\times r \\\\times r} \\\\gets 0$\\n\\n$\\\\State \\\\text{Initialize second-order moment} \\\\mathcal{V}_0 \\\\in \\\\mathbb{C}^{r \\\\times r \\\\times r \\\\times r} \\\\gets 0$\\n\\n\\nLastly, we want to again ask the reviewer to please go over the comments and general response to see that this paper is technical but also contains a lot of theory, bridging the gap between theory and efficiency for tensor-based structures. This is the first work that mathematically proves that tensor gradients become low-rank, and we prove convergence for the FNO case. \\n\\nWe would be happy to answer any follow-up questions! Thank you once again for reviewing our paper.\"}", "{\"title\": \"General Response (Part 2)\", \"comment\": \"We want to thank all the reviewers for their review of our paper. So far, we haven't gotten a single reply back :(, and we have answered all the questions in detail and now included a detailed theoretical section of our work, which was missing before. **With the discussion period ending soon on Nov 26, 2024 (Anywhere on Earth), we kindly await your feedback or an updated assessment of our paper**. 
Please let us know if your concerns have been satisfactorily addressed; if so, we would greatly appreciate it if you could update your ratings. We are available and would be happy to answer any additional questions. Thank you once again!\"}", "{\"title\": \"General summary of our work for the decision\", \"comment\": \"Thank you to all the reviewers for the thoughtful and insightful feedback on our paper. We greatly appreciate the time and effort you have put into reviewing our work. We want to summarize the whole work and discussion, since today is the last day we can respond. Although we are disappointed that 3 of the reviewers haven't responded to us since their comments, we are hopeful that the AC can take into consideration all the answers and revisions we made when making the final decision. To make it easier, we include below a summary of our revisions and of the new theoretical work.\\n\\n# [New] Theoretical Contributions\\n\\n## 1. Comprehensive Theory of Tensor-GaLore\\n\\nWe have developed a complete theoretical framework that explains why and how Tensor-GaLore works:\\n\\n### Reversibility Analysis\\n- Proved that FNO is reversible when using reversible activations\\n- Showed that the spectral layer, MLP layer, and their composition maintain reversibility\\n- This provides the foundation for analyzing gradient structure\\n\\n### Low-Rank Structure\\n- Proved that gradients naturally develop mode-wise low-rank structure during training\\n- Showed that each tensor mode can have different rank behavior: Fourier modes (3, 4) exhibit natural spectral decay, while channel modes (1, 2) maintain information structure\\n- Demonstrated that this emergent structure enables efficient compression\\n\\n### Convergence Guarantees\\n- Proved convergence under mild mode-k continuity conditions\\n- Showed that fixed projection matrices are sufficient\\n- Established explicit bounds on the convergence rate for each mode\\n\\n## 2. Why Tensor-GaLore Outperforms GaLore\\n\\nThe final lemma (Lemma 11) provides a crucial theoretical justification for Tensor-GaLore's superiority:\\n\\n1. **Mode-wise Independence**: Tensor-GaLore achieves low rank in all modes simultaneously; GaLore cannot preserve low-rank structure across all modes due to matricization.\\n\\n2. **Structural Preservation**: Tensor-GaLore maintains the natural tensor structure of FNO weights; GaLore's matricization scrambles important multi-dimensional relationships.\\n\\n3. **Optimal Compression**: Tensor-GaLore: all modes k satisfy sr_k(G_t) \\u2264 N_k/2 asymptotically. GaLore: at least one mode k must maintain sr_k(G_t) \\u2265 min(N_k/2, N').\\n\\n# Practical Improvements\\n\\n## 1. Memory Efficiency\\n- Achieved up to 75% reduction in optimizer memory usage\\n- Demonstrated scalability to high-resolution problems (1024\\u00d71024 Navier-Stokes)\\n- Enabled training of larger models that were previously infeasible\\n\\n## 2. Performance Gains\\n- Maintained or improved model accuracy across all tested PDEs\\n- Showed implicit regularization benefits from tensor structure preservation\\n- Demonstrated better generalization in high-resolution settings\\n\\n## 3. Implementation Optimizations\\n- Introduced \\\"warm-restart\\\" initialization for tensor decomposition\\n- Developed efficient mode-wise projection updates\\n- Carefully balanced computational overhead vs. memory savings\\n\\n# Broader Impact\\n\\nOur work has great implications across scientific computing, methodological advancement, and practical applications. In scientific computing, Tensor-GaLore enables the training of larger, more accurate FNOs while substantially reducing computational resource requirements, making advanced scientific modeling more accessible to researchers with limited resources (this is the key point we want to emphasize, as academic labs and independent researchers do not have access to high-end GPU clusters). Our methodological contributions introduce novel techniques for tensor optimization and provide a comprehensive theoretical framework for analyzing tensor methods, effectively bridging the gap between matrix and tensor algorithms for the decomposition of gradients. These advances have direct practical applications in critical areas such as high-resolution climate modeling, precise fluid dynamics simulations, and electromagnetic wave propagation modeling.\\n\\n# Summary of Changes after the discussions and the new revised version\\n\\n1. **Theoretical Extensions**: added complete proofs of all theorems; expanded mode-k continuity analysis; enhanced tensor operation definitions and properties.\\n\\n2. **Technical Clarifications**: improved explanation of reversibility conditions; added detailed analysis of computational complexity; enhanced discussion of tensor rank properties.\\n\\n3. **Additional Results**: extended experimental validation; added ablation studies; included more detailed memory analysis.\\n\\nTo conclude, we believe that this work represents a significant advancement in both theoretical understanding and practical implementation of memory-efficient training for tensor-based neural networks, especially neural operators (FNO), which are particularly important for scientific machine learning applications. We also provide code to reproduce all our experiments and detail where to get the datasets.\"}", "{\"title\": \"Response to Reviewer po1K\", \"comment\": \"We are writing to kindly remind you that we posted our response 6 days ago. If you have any additional feedback, concerns, or questions regarding our response, we would greatly appreciate hearing from you.\"}" ]
}
C81bqFCmMf
COMET: Benchmark for Comprehensive Biological Multi-omics Evaluation Tasks and Language Models
[ "Yuchen Ren", "Wenwei Han", "Qianyuan Zhang", "Yining Tang", "Weiqiang Bai", "Yuchen Cai", "Lifeng Qiao", "Hao Jiang", "Dong Yuan", "Tao Chen", "Siqi Sun", "Pan Tan", "Wanli Ouyang", "Nanqing Dong", "Xinzhu Ma", "Peng Ye" ]
As key elements within the central dogma, DNA, RNA, and proteins play crucial roles in maintaining life by guaranteeing accurate genetic expression and implementation. Although research on these molecules has profoundly impacted fields like medicine, agriculture, and industry, the diversity of machine learning approaches—from traditional statistical methods to deep learning models and large language models—poses challenges for researchers in choosing the most suitable models for specific tasks, especially for cross-omics and multi-omics tasks due to the lack of comprehensive benchmarks. To address this, we introduce the first comprehensive multi-omics benchmark COMET (Benchmark for Biological **CO**mprehensive **M**ulti-omics **E**valuation **T**asks and Language Models), designed to evaluate models across single-omics, cross-omics, and multi-omics tasks. First, we curate and develop a diverse collection of downstream tasks and datasets covering key structural and functional aspects in DNA, RNA, and proteins, including tasks that span multiple omics levels. Then, we evaluate existing foundational language models for DNA, RNA, and proteins, as well as the newly proposed multi-omics method, offering valuable insights into their performance in integrating and analyzing data from different biological modalities. This benchmark aims to define critical issues in multi-omics research and guide future directions, ultimately promoting advancements in understanding biological processes through integrated and different omics data analysis.
[ "Multi-omics Benchmark", "AI for Biology", "Language Models" ]
Reject
https://openreview.net/pdf?id=C81bqFCmMf
https://openreview.net/forum?id=C81bqFCmMf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zcvNQXKP80", "zQql7TG8R6", "xeVH2ftHms", "xEY22wanY3", "to4mJyJRLN", "tQvOQCaeE1", "rcvoe6CIWX", "nTnyIDDWff", "lKnfesRopg", "l85GC9gHFX", "kNCmB4mZk1", "kH5QGX12fY", "jdWpSp308o", "jGtip2vf1m", "jGHkikiFwv", "hIHDwTqiBm", "emDERjB2sD", "dpSu0PRnrm", "dBZQ6MwDuH", "bp4r8esRCX", "b9cI04UQrR", "Z3VkzZUWe6", "XRgZHA1LAv", "Uw2igQLr8o", "URN9NDHDwF", "Rgs9FcRK71", "QCyRNHDtou", "Q4wNtNW2VZ", "Piyauxmr7o", "PWlXjnxXsk", "MXLHvaSTjm", "LP6Do4fzJH", "KL6im3FHQL", "JmLBClC65m", "JVNgjxCAMa", "HzgZfgvzMQ", "GfCdBtGSgP", "E2qfRz3oCj", "Av4xq0E6f1", "8MHQcED4ys", "6yHULukKUd", "5ywjPyI38t", "4FjH5pYIRy", "43ZTEqnEuJ", "3zRLJm5F2j", "3hglZWTuMw", "2lhSQzwXfr", "2JwMonsiL3", "0Fw71iatnT" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732613038120, 1732418005335, 1733220643617, 1730045434925, 1732429603414, 1730647104657, 1734749266425, 1733077099864, 1732428717537, 1737523445065, 1732731808960, 1730280394484, 1732814223154, 1732429215978, 1732612642354, 1732430060235, 1732454984410, 1733145711159, 1732428831919, 
1732428792562, 1733221566983, 1733221176741, 1732429359283, 1732390928966, 1732429456653, 1732732698355, 1732428340661, 1733166163822, 1732428981567, 1732472075695, 1733076039866, 1733220879618, 1732428860643, 1732391104455, 1732428079709, 1732428456833, 1732430455544, 1732613058667, 1732430627372, 1733075967254, 1733221334910, 1730510426720, 1732418140150, 1732732198346, 1732564464571, 1732613067610, 1732429648235, 1732740380068, 1732612979452 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_6UP5" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_8ScB" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_6UP5" ], [ "ICLR.cc/2025/Conference/Submission1282/Area_Chair_EgCW" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_z8kJ" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_8ScB" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_6UP5" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_8ScB" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_8ScB" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1282/Reviewer_TFBg" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_TFBg" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_8ScB" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_z8kJ" ], [ "ICLR.cc/2025/Conference/Submission1282/Authors" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_TFBg" ], [ "ICLR.cc/2025/Conference/Submission1282/Reviewer_6UP5" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for addressing my comments\"}", "{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"We sincerely appreciate your thoughtful and constructive feedback. Below, we address each of the identified weaknesses and questions in detail.\\n\\n\\n### **Question1: Criteria for Choosing Models** \\n\\nWe outline the criteria for selecting models. Through a comprehensive review of comparative studies in the literature, we evaluated models across various omics tasks. For each omics domain, we selected the two top-performing models due to computational constraints. 
Furthermore, all of the selected models are open-source and reproducible.\\n\\n- For DNA-related tasks, HyenaDNA, a convolution-based long-sequence model, demonstrated performance comparable to other pretrained DNA language models in DNA generation and understanding tasks. However, based on findings from sources such as BEND [r1] and Genomics-FM [r2], we identified DNABERT2 and NTv2-multispecies as the models that achieved the best performance across most tasks. Therefore, we selected DNABERT2 and NTv2-M for DNA omics.\\n- In RNA omics, taking into account the excellent performance and the complete open-source nature, we referred to evaluations from the BEACON [r3] benchmark and selected RNA-FM and BEACON-B as the top-performing models.\\n- For proteomics, results from PEER [r4] and ProteinGym [r5] led us to initially focus on the ESM (Evolutionary Scale Modeling) family of models. Specifically, we selected ESM-1b for testing. Following the release of updated versions in the ESM series, we included ESM-2 as an additional test model.\\n\\nAt the time of our experiments, LucaOne was the only available multi-omics language model. Consequently, LucaOne was chosen for tasks involving multi-omics.\\n\\n[r1] BEND: Benchmarking DNA Language Models on biologically meaningful tasks. ICLR 2024.\\n\\n[r2] Genomics-FM: Universal Foundation Model for Versatile and Data-Efficient Functional Genomic Analysis. bioRxiv 2024.\\n\\n[r3] BEACON: Benchmark for Comprehensive RNA Tasks and Language Models. NeurIPS 2024.\\n\\n[r4] PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence Understanding. NeurIPS 2022.\\n\\n[r5] ProteinGym: Large-Scale Benchmarks for Protein Design and Fitness Prediction. NeurIPS 2023.\"}", "{\"title\": \"Finetuning schemes and data nature/quality\", \"comment\": \"**1. Finetuning Schemes**\\nTo further address your suggestions on fine-tuning strategies, we added experiments to explore the impact of LoRA and frozen fine-tuning. 
For comprehensive evaluation, we select one task each from DNA, RNA, and protein.\\n\\n- We observed that in most cases, the LoRA fine-tuning strategy outperforms the frozen setting and comes close to full-parameter fine-tuning (all). This shows that LoRA, as a PEFT method, can achieve relatively good results at low cost, and further supports the soundness of our experimental setup.\\n- On the other hand, we found that the frozen setting lost less performance on protein tasks than LoRA. We speculate that this may be because protein pre-training models have been studied more thoroughly, so the pre-trained weights likely contain more reliable information. It also shows that there is still great potential for the development of DNA and RNA pre-training models.\\n\\n\\nWe hope these additional experiments make the article more rigorous.\\n\\n| Model | Enhancer Activity(Dev) | | | Enhancer Activity(Hk) | | | Programmable RNA | | | Thermostability | | |\\n|:--------:|:----------------------:|:------:|:-----:|:---------------------:|:------:|:-----:|:-------------:|:------:|:-----:|:---------------:|:------:|:-----:|\\n| | all | frozen | lora | all | frozen | lora | all | frozen | lora | all | frozen | lora |\\n| DNABERT2 | 68.22 | 39.44 | 66.32 | 77.43 | 41.75 | 75.93 | 54.79 | 19.99 | 54.24 | 16.54 | 60.94 | 53.27 |\\n| NTv2 | 66.2 | 29.36 | 65.98 | 76.51 | 28.89 | 75.85 | 55.27 | 23.09 | 54.86 | 60.19 | 60.95 | 57.42 |\\n| RNA-FM | 68.87 | 36.31 | 69.03 | 77.76 | 37.86 | 77.38 | 55.98 | 20.05 | 57.36 | 55.31 | 56.8 | 40.32 |\\n| BEACON-B | 66.05 | 34.57 | 66.86 | 76.31 | 38.63 | 76.39 | 54.67 | 25.73 | 52.21 | 60.76 | 57.18 | 50.98 |\\n| ESM-1b | 62.21 | 51.71 | 67.06 | 73.81 | 63.31 | 76.54 | 54.42 | 52.29 | 56.32 | 70.94 | 69.83 | 70.58 |\\n| ESM-2 | 68.04 | 43.27 | 68.82 | 77.03 | 54.73 | 77.5 | 56.27 | 35.51 | 56.94 | 69.36 | 63.02 | 65.87 |\\n\\n**2. Data Nature of Tasks**\\n* Gene Expression\\n We adopt the data processing methodology from Xpresso [r1]. 
Human gene expression data comes from the Epigenomics Roadmap Consortium, which provides normalized RNA-seq values for protein-coding mRNAs across 56 tissues and cell lines.\\n \\n Due to the large number of parameters in biological language models and the memory limitations of A100 GPUs, our experiments show that trimming sequence lengths to 6000 bp ensures compatibility with all models for processing input sequences. By inputting consecutive 6000 bp nucleotide fragments from different positions in the processed sequences into the Xpresso model, we identify that the sequence indexed from position 7000 to 12999 (length 6000 bp) achieves optimal test performance. This segment contains the most information related to gene expression levels.\\n \\n For training, we use the 6000 bp nucleotide sequence indexed from position 7000 to 12999 as input and the expression data for 56 tissues as labels. The train, validation, and test dataset splits follow the methodology used in Xpresso.\\n\\n* Enhancer Activity Prediction\\n We follow the processing procedure described in [r2]. The data includes sequence information and transcriptional activity metrics for both Drosophila and humans, encompassing developmental and housekeeping transcriptional activity levels.\\n \\n We use downloaded sequences of 249 bp in length, along with `Dev_log2_enrichment_scaled` and `Hk_log2_enrichment_scaled`, which respectively represent developmental and housekeeping transcriptional activity information. The dataset is divided into training, validation, and test sets according to the method outlined in [r2].\\n\\n\\n[r1] Predicting mrna abundance directly from genomic sequence using deep convolutional neural networks. Cell Reports 2020.\\n\\n[r2] DeepSTARR predicts enhancer activity from DNA sequence and enables the de novo design of synthetic enhancers. 
Nature Genetics 2022.\"}", "{\"summary\": \"This paper proposes a new benchmarking pipeline for biological sequence models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper is well-structured and covers most recent foundation models for biological sequence analysis, which will attract researchers in this field.\", \"weaknesses\": \"Although the pipeline and experiments are interesting, I do have some questions and I believe that the authors overclaim their dataset preparation and pipeline design. Here are my questions:\\n\\n1. In Figure 1, the authors mention that the benchmarking resources come from different databases. However, if we zoom in on each task, only one dataset is selected for benchmarking, which is really a biased selection. For similar benchmarking analyses [1,2], most of them cover diverse datasets for each task. This is also different from a recent publication BEND [3], because the authors here intend to benchmark information like gene expression, which is less constant compared with tasks like gene finding. Would the authors please justify that their choices are unbiased or consider including more datasets?\\n\\n2. For Table 1, the metrics seem inconsistent. For example, why do the authors sometimes select SCC, while other cases are based on PCC or R2? Should all of them be used for benchmarking a regression task? Also, what is the reason for not using PCC in all cases but using SCC for certain tasks? I cannot understand the reason.\\n\\n3. In Table 2, the authors consider DNABERT2 [4]. However, I am confused about the max token length of DNABERT2. The current version of DNABERT2 can handle DNA sequences of different lengths, and the token length can be increased. Could the authors use the latest version for benchmarking to make a fair and useful comparison?\\n\\n4. In Table 3, what is the meaning of percentage (%) for PCC, SCC or R2? 
These metrics can be negative, so what is the meaning of a negative percentage?\\n\\n5. Could the authors ensure that their task-specific method (like Xpresso) is really the optimal solution? Why do the authors not choose Enformer [5] or Borzoi [6] for gene expression prediction? It would be much more helpful to list the sources of the task-specific methods and the reason for choosing them (e.g., are they top performers in benchmarking studies?)\\n\\n6. I am a bit confused by the conclusions of this benchmarking analysis. Would the authors please highlight their discoveries in the abstract section so that readers can learn from your paper? I think most of the content of the abstract and conclusion presents the experiment design, but other key information, like the fact that some tasks need better methods, should be emphasized. \\n\\n7. Furthermore, I wonder if the authors can check whether the validation dataset might have been used in the pre-training stage of some foundation models. If so, will the benchmarking analysis for the frozen one be fair?\\n\\n[1] https://www.nature.com/articles/s41592-019-0690-6\\n\\n[2] https://www.nature.com/articles/s41592-022-01480-9\\n\\n[3] https://arxiv.org/abs/2311.12570\\n\\n[4] https://github.com/MAGICS-LAB/DNABERT_2\\n\\n[5] https://www.nature.com/articles/s41592-021-01252-x\\n\\n[6] https://www.biorxiv.org/content/10.1101/2023.08.30.555582v1\", \"questions\": \"Please see my weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to Reviewer TFBg\", \"comment\": \"### **Weakness7: Explanation of EC Task**\\n\\nRegarding the performance changes of CaLM on the EC task, we first checked whether the fine-tuning was correctly completed. When using the CaLM model for downstream task predictions, we directly utilize the official code and weights (from https://github.com/oxpig/CaLM) and add only a linear layer for adaptation to the downstream task. 
The code was reviewed by two individuals to ensure its correctness. To ensure the reliability of the experimental results, we tried different hyperparameters, and the results are shown in the table.\\n\\n\\n| learning_rate | 0.001 | 0.0001 | 0.0005 | 0.00001 | 0.00005 | 0.000001 | 0.000002 | 0.000005 |\\n|:--------------:|:------:|:------:|:------:|:-------:|:-------:|:--------:|:--------:|:--------:|\\n| Frozen Finetuning | 0.7494 | 0.6752 | 0.7337 | 0.4434 | 0.6203 | 0.1431 | 0.1427 | 0.3401 |\\n| Full Parameter Finetuning | 0.1777 | 0.5006 | 0.445 | 0.4423 | 0.489 | 0.3532 | 0.4097 | 0.4335 |\\n\\n\\nDuring the process of adjusting different learning rates, we observed that fine-tuning indeed led to a decline in CaLM's performance on the EC task. We speculate that a possible reason for the performance degradation of CaLM on the EC task could be: \\n\\n- In the other two protein downstream tasks, the sequences involve a large number of non-natural variant sequences, while the sequences of the EC task are mainly natural protein sequences, which are closer to the data distribution learned during pre-training. Fine-tuning all parameters may make it easier for CaLM to forget the knowledge learned in pre-training, while the frozen method can slow down the forgetting of CaLM's pre-training knowledge to a certain extent.\\n\\n\\n### **Weakness8: Using Protein Sequence Better Than Codon Sequence**\\n\\nRegarding amino acid sequences outperforming nucleotide sequences, we find that the Flu task shows little difference when using amino acid sequences versus nucleotide sequences, whereas the Beta-Lac and EC tasks perform significantly better with amino acid sequences compared to nucleotide sequences. 
A possible explanation is that the Flu task involves predicting sequence properties, where single-nucleotide mutations leading to amino acid variations (even synonymous codons resulting in the same amino acid) can alter the sequence properties. This aligns with the granularity of single nucleotides. In contrast, the Beta-Lac and EC tasks involve changes at the codon level. Beta-Lac focuses on property changes due to codon mutations, while EC examines how amino acid changes affect protein function. Therefore, the tokenization methods used by DNA and RNA models (such as BPE, non-overlapping 6mer, and single) provide good generalization at the single-nucleotide level but perform relatively poorly on tasks that emphasize the importance of non-overlapping 3mers, compared to directly modeling amino acid sequences.\\n\\n### **Weakness9: Limitations of Multi-Omics Foundation Models**\\n\\nCurrent multi-omics models still have room for improvement, with the main limitations being the following:\\n\\n- Simple Concatenation for Single-Omics Model Integration: Current representation fusion methods rely on very simple concatenation techniques, which limit the exploration of information associations between different omics.\\n- Lack of Biological Prior Knowledge and Task-Specific Optimization: Current multi-omics methods often directly adopt the transfer learning approach from language models, focusing more on the generalizability of sequence representations. They do not incorporate prior biological knowledge and lack specific optimizations or post-processing tailored to downstream tasks.\\n- Primitive Pre-training Methods for Multi-Omics Data: The pre-training methods for different omics data in current multi-omics models are still quite basic. 
We observe consistently lower performance in experiments involving proteins, which may indicate insufficient learning of protein representation knowledge.\"}", "{\"summary\": \"The paper presents the performance of eight foundation models of varying design on 17 tasks of varying underlying biological attributes. The authors adapted and trained the models for the\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The authors grouped various tasks from multiple sources and did exceptional work in clearly detailing the underlying biological importance of the tasks. They chose multi-omics data. The tasks not only describe different biological aspects but also varying computational types, as some are classification and some are regression tasks. The training data process was clearly detailed.\\nThey displayed the performance of cross-omics tasks by combining models, a crucial aspect and direction for future research.\", \"weaknesses\": \"The manuscript does not sufficiently explain the primary criteria for selecting models or tasks. For instance, the authors did not include models like HyenaDNA; it is unclear if this was due to lack of availability, popularity, or suitability. Furthermore, the rationale behind choosing specific tasks, such as those related to gene expression, is underdeveloped. Additional databases like PanglaoDB could have been considered to broaden the task scope. In addition, the task generation process is not sufficiently detailed. The authors' decision to group cell types also introduces a layer of specificity that might limit the generalizability of conclusions across model families or architectures. If I'm not mistaken, the tasks are not shared; if not, some summary statistics are required to support the conclusions.\\n\\nModel performance is evaluated based on fine-tuning, but it is unclear whether the observed results are due to the model architecture or the fine-tuning process itself. 
This could be addressed by performing multiple training runs and reporting the mean and variance to provide a more reliable performance measure. Also, only a single metric is shared, which again limits the ability to generalize. \\nThe term frozen in Table Three is not fully explained. I assume it refers to the use of the model's released weights; if so, is the performance of the frozen DNABERT better than the non-frozen? The difference in performance between the frozen and unfrozen should be discussed in the results. \\n\\nIn summary, three main limitations affect this study: (1) the absence of clear criteria for model and task selection, (2) a lack of performance confidence intervals, and (3) insufficient detail in task creation. These weaknesses limit the generalizability of the findings and hinder reproducibility and application in other contexts. Even the conclusion that \\u201clanguage models outperform na\\u00efve supervised models\\u201d is hard to ascertain for the\", \"questions\": \"1.\\tDetail the criteria for choosing models?\\n2.\\tDetail the criteria for creation of the specific tasks (why this DB and not another)?\\n3.\\tElaborate on the performance: add confidence intervals and other metrics?\\n4.\\tWhat does the term frozen mean?\\n5.\\tExpand on the task creation process or supply the tasks or the summary statistics.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a comprehensive multi-omics benchmark, called COMET (Benchmark for Biological COmprehensive Multi-omics Evaluation Tasks and Language Models).\\nThe benchmark is designed to evaluate models for diverse single/cross/multi-omics tasks.\\nThe reviewers note that the presented benchmark is comprehensive and has the potential to provide useful resources for the evaluation and optimization of (multi-omics) language models.\\nHowever, more comprehensive 
benchmarking results across different models, additional and deeper insights into the major factors of the various models leading to their performance assessment outcomes, further rationale/explanation regarding the benchmark and assessments (e.g., for model selection, task design, and benchmark construction), and providing additional context of the current work in relation to other recent papers may be required to further strengthen the current study.\", \"additional_comments_on_reviewer_discussion\": \"The authors have provided extensive additional explanations, justification, and additional results, which have addressed some of the reviewers' initial concerns.\\nThere has been some disagreement between the authors and reviewers regarding the main scope & contribution of the work (e.g., regarding methodological contribution) as well as what would be required in good/useful benchmark papers.\\nThese have been taken into consideration in the AC's recommendation.\\nWhile the AC sees the potential value of the current work, there appears to be room for further improvement as noted above and the AC expects that the manuscript can benefit from a major revision to address these points.\"}", "{\"title\": \"Thanks for your feedback\", \"comment\": \"Thank you for your thoughtful feedback and for increasing your score. We appreciate your acknowledgment of the progress made in addressing your concerns.\\n\\nTo further address your suggestion regarding model sizes, we conducted additional experiments exploring the impact of model scale. 
Specifically, we evaluated NTv2 and ESM-2 models of varying sizes across DNA, RNA, and protein tasks.\\n\\n- **For tasks aligned with the model's corresponding omics domain,** such as NTv2 on DNA and RNA tasks and ESM-2 on protein tasks, the results indicate that larger model sizes generally lead to improved performance, suggesting task-specific scaling benefits.\\n\\n- **For other omics tasks,** we observed that even a smaller model like ESM-2_8m can achieve performance comparable to the larger NTv2 model. We hypothesize that this may be attributed to the significantly larger volume of pretraining sequences available for ESM-2 on protein data compared to NTv2 on DNA data. This insight could guide future research on pretraining nucleotide models with larger datasets for enhanced cross-omics capabilities.\\n\\nWe hope these findings add further clarity and demonstrate our commitment to enhancing the rigor and depth of our benchmarking analysis. Please let us know if there are any additional aspects you would like us to address.\\n| | DNA | | RNA | Protein |\\n|-----------------------------------------|----------|---------|-------|--------------|\\n| Model/Task | EA (Dev) | EA (Hk) | APA | Ther |\\n| Metric | PCC | PCC | R^2 | Spearman's p |\\n| Pretrained Omics Language Model (Frozen) | | | | |\\n| NTv2_50m | 30.24 | 30.64 | 26.47 | 61.09 |\\n| NTv2_100m | 29.36 | 28.89 | 23.09 | 60.95 |\\n| NTv2_250m | 36.44 | 37.99 | 31.92 | 61.45 |\\n| NTv2_500m | 32.80 | 35.22 | 28.86 | 58.90 |\\n| | | | | |\\n| ESM-2_8m | 48.33 | 65.13 | 38.16 | 61.15 |\\n| ESM-2_35m | 47.84 | 61.57 | 45.79 | 63.07 |\\n| ESM-2_150m | 43.27 | 54.73 | 35.51 | 63.02 |\\n| ESM-2_650m | - | - | 43.57 | 64.76 |\"}", "{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"### **Question3: Confidence Interval of the Experiment**\\n\\nThank you for your valuable suggestions. 
To address your concerns, we conducted additional experiments to evaluate the impact of varying random seeds under identical optimal training hyperparameters. By reporting the mean and standard deviation, we aim to provide greater confidence in our results. Specifically, we employed three random seeds for these experiments. Due to limitations in computational resources and time, we prioritized completing the full set of single-omics (Tables 1 and 2) and multi-molecular (Tables 3 and 4) tasks with the frozen backbone and the naive supervised models, and the results are shown in the following tables.\\n\\n**Multi-seed versus single-seed experiments for single-omics results**\", \"table_1\": \"Results of different models on single-molecular tasks evaluated with three random seeds. Mean (std) is reported for each experiment.\\n\\n||DNA|||RNA|||Protein|||\\n|:-------------------------------:|:-----:|:-------:|:------:|:-----:|:-----:|:-----:|:-------:|:------------:|:-----:|\\n|Model/Task|GE|EA(Dev)|EA(Hk)|APA|PRS|SSP|Cont|Ther|EC|\\n|Metric|R^2|PCC|PCC|R^2|R^2|F1|P@ L/5|Spearman's p|fmax|\\n|Naive Supervised Model||||||||||\\n|CNN|37.38(2.59)|66.03(0.15)|74.48(0.21)|50.93(0.17)|45.40(0.66)|49.95(0.82)||||\\n|ResNet|35.73(2.67)|67.29(0.15)|75.92(0.10)|56.45(0.94)|55.21(0.28)|57.26(3.14)||||\\n|LSTM|41.63(0.29)|68.82(0.14)|76.83(0.22)|67.03(0.86)|55.45(0.71)|58.61(0.21)||||\\n|Pretrained Omics Language Model 
(Frozen)||||||||||\\n|DNABERT2|14.19(0.67)|38.46(0.02)|40.45(1.13)|38.68(1.77)|20.37(0.37)|13.56(1.66)|7.09(1.41)|59.07(1.66)|47.18(1.41)|\\n|NTv2|13.05(1.54)|30.80(1.25)|31.70(2.43)|36.25(5.74)|25.22(1.91)|14.79(1.02)|7.26(0.17)|60.08(1.02)|40.47(0.17)|\\n|RNA-FM|37.18(0.20)|36.27(0.04)|37.87(0.26)|30.40(2.42)|20.21(0.15)|64.64(0.64)|1.71(1.76)|56.28(0.64)|29.56(1.76)|\\n|BEACON-B|23.16(0.53)|34.47(0.10)|38.55(0.07)|35.65(9.37)|25.81(1.01)|58.51(0.59)|10.74(1.63)|57.52(0.59)|37.19(1.63)|\\n|ESM-1b|30.96(4.13)|51.78(0.86)|63.48(0.49)|52.48(4.48)|51.89(0.36)|30.52(0.83)|38.65(0.11)|69.43(0.83)|88.08(0.11)|\\n|ESM2|31.94(3.08)|45.31(1.92)|56.87(3.70)|36.37(7.86)|35.57(0.19)|44.65(2.90)|43.32(0.49)|65.47(2.90)|77.60(0.49)|\\n|LucaOne|41.25(0.57)|46.86(0.01)|51.44(0.01)|40.38(0.61)|38.85(0.11)|57.66(0.30)|4.00(0.06)|66.66(0.30)|73.69(0.06)|\", \"table_2\": \"Results of different models on single-molecular tasks evaluated with one random seed.\\n\\n| | DNA | | | RNA | | | Protein | | |\\n|:----------------------------------------:|:-----:|:-------:|:------:|:-----:|:-----:|:-----:|:-------:|:------------:|:------:|\\n| Model/Task | GE | EA(Dev) | EA(Hk) | APA | PRS | SSP | Cont | Ther | EC |\\n| Metric | R^2 | PCC | PCC | R^2 | R^2 | F1 | P@ L/5 | Spearman's p | fmax |\\n| Naive Supervised Model | | | | | | | | | |\\n| CNN | 34.52 | 66.08 | 74.29 | 50.93 | 45.2 | 49.95 | | | |\\n| ResNet | 38.65 | 67.41 | 75.84 | 56.45 | 55.33 | 57.26 | | | |\\n| LSTM | 41.34 | 68.93 | 77.02 | 67.03 | 56.54 | 58.61 | | | |\\n| Pretrained Omics Language Model (Frozen) | | | | | | | | | |\\n| DNABERT2 | 13.82 | 39.44 | 41.75 | 40.48 | 19.99 | 13.51 | 11.24 | 60.94 | 47.36 |\\n| NTv2 | 13.78 | 29.36 | 28.89 | 30.86 | 23.09 | 14.72 | 7.48 | 60.95 | 40.57 |\\n| RNA-FM | 37.15 | 36.31 | 37.86 | 32.88 | 20.05 | 64.43 | 2.07 | 56.80 | 29.93 |\\n| BEACON-B | 23.61 | 34.57 | 38.63 | 41.21 | 25.73 | 58.73 | 12.15 | 57.18 | 39.08 |\\n| ESM-1b | 28.97 | 51.71 | 63.31 | 53.11 | 52.29 | 31.25 | 39.09 
| 69.83 | 88.17 |\\n| ESM2 | 28.99 | 43.27 | 54.73 | 27.38 | 35.51 | 44.08 | 43.34 | 63.02 | 77.80 |\\n| LucaOne | 41.48 | 46.86 | 51.44 | 40.11 | 38.95 | 56.86 | 3.84 | 66.33 | 73.65 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank you for your feedback. To address your remaining concerns\", \"comment\": \"Thank you for your response. We genuinely aim to address your concerns.\\n\\n- Our benchmark carefully considers dataset diversity and already includes multiple datasets for the same task. For example, the enhancer activity prediction task incorporates two datasets: one for housekeeping enhancers and another for developmental enhancers. This enables a more comprehensive evaluation of enhancer regulatory capabilities from different perspectives.\\n- Additionally, we have enriched the diversity of datasets for other tasks as well. For the APA Isoform task, we collected three additional datasets on Alternative Polyadenylation from Massively Parallel Reporter Assays and conducted corresponding experimental evaluations, as detailed in the table below.\\n\\n| Model/Task | APA_testset | HSPE1 | SNHG6 | WHAMMP2 |\\n|:------------------------:|:-----------:|:-----:|:-----:|:-------:|\\n| Metric | R^2 | R^2 | R^2 | R^2 |\\n| Literature SOTA | | | | |\\n| APARENT | 57.68 | 33.15 | 9.55 | 20.12 |\\n| Naive Supervised Model | | | | |\\n| CNN | 50.93 | 10.24 | 2.91 | 4.38 |\\n| ResNet | 56.45 | 13.91 | 0.77 | 6.41 |\\n| LSTM | 67.03 | 35.79 | 4.22 | 19.42 |\\n| full parameter fine-tune | | | | |\\n| DNABERT2 | 72.40 | 34.69 | 0.24 | 16.32 |\\n| NTv2 | 68.75 | 35.09 | 11.08 | 17.53 |\\n| RNA-FM | 70.32 | 39.24 | 7.34 | 16.43 |\\n| BEACON-B | 70.59 | 27.54 | 0.04 | 8.71 |\\n| ESM-1b | 68.82 | 42.87 | 10.99 | 16.88 |\\n| ESM-2 | 69.52 | 41.31 | 10.68 | 20.27 |\\n| LucaOne | 69.25 | 39.78 | 10.62 | 13.74 |\\n| freeze backbone | | | | |\\n| DNABERT2 | 40.48 | 0.56 | 0.04 | 0.17 |\\n| NTv2 | 30.86 | 0.96 | 0.03 | 0.24 |\\n| RNA-FM | 32.88 | 
0.61 | 0.91 | 0.05 |\\n| BEACON-B | 41.21 | 1.64 | 0.22 | 1.97 |\\n| ESM-1b | 53.11 | 23.53 | 7.91 | 9.52 |\\n| ESM2 | 27.38 | 1.01 | 4.73 | 0.89 |\\n| LucaOne | 40.11 | 4.66 | 1.04 | 1.72 |\\n\\n#### **1. Performance Trends Across Datasets**\\n\\n- **APA_testset is the Benchmark Leader**:\\n - Most models achieve their best $R^2$ on this dataset, likely due to its relatively simpler or more predictable biological patterns.\\n\\n- **HSPE1 is Moderately Predictable**:\\n - Performance on HSPE1 is lower than APA_testset but higher than SNHG6 and WHAMMP2.\\n - Pretrained models like ESM-1b show the strongest results here.\\n\\n- **SNHG6 and WHAMMP2 are the Most Challenging**:\\n - Both datasets consistently yield the lowest $R^2$ scores, with SNHG6 performing particularly poorly for most models.\\n - This indicates a need for more sophisticated or specialized models for these datasets.\\n\\n#### **2. General Trend of Models Across Datasets**\\n\\n- **Pretrained Models Dominate**:\\n - Fine-tuned pretrained models (e.g., DNABERT2, ESM variants) consistently outperform classical architectures on all datasets.\\n - However, the advantage of pretrained models diminishes for SNHG6 and WHAMMP2.\\n\\n- **Classical Models Are Dataset-Dependent**:\\n - While classical models (e.g., LSTM) perform reasonably well on APA_testset, they struggle on other datasets, highlighting their limited generalizability.\"}", "{\"summary\": \"This paper introduces a comprehensive multi-omics benchmark encompassing a diverse collection of 17 cross-omics downstream tasks and datasets. It evaluates a set of state-of-the-art (SOTA) foundational language models, providing detailed descriptions of both implementation and outcomes. The project represents a significant amount of work and offers valuable information for the research community. The paper is well-written and easy to follow. While the paper excels as a resource, it lacks methodological novelty and the findings are largely intuitive. 
Addressing these areas and providing more in-depth analysis and discussion would significantly enhance the paper's contribution to the field.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Comprehensive Benchmark: The introduction of a multi-omics benchmark covering 17 diverse tasks and datasets is a significant contribution. It provides a standardized framework for evaluating the performance of various models across different omics data types.\\n\\nDetailed Implementation: The paper includes thorough descriptions of the implementation process, which enhances reproducibility and allows other researchers to build upon this work.\\n\\nValuable Resource: The benchmark and the accompanying evaluations serve as a valuable resource for the community, facilitating future research and development in the field of multi-omics.\", \"weaknesses\": \"Methodological Novelty: While the paper is resource-rich, it lacks methodological innovation. The primary focus is on benchmarking existing models rather than introducing new techniques or approaches.\\n\\nInsightfulness of Findings: Many of the findings presented are intuitive and do not offer deep insights into the underlying mechanisms or potential improvements. More in-depth analysis and discussion of the results would enhance the paper's impact.\", \"questions\": \"Benchmark Scope: While the benchmark is comprehensive, it would be beneficial to discuss any limitations or potential biases in the selection of tasks and datasets. 
This would provide a more balanced perspective and guide future expansions of the benchmark.\\n\\nComparison with Existing Benchmarks: A comparison with existing multi-omics benchmarks, if any, would help contextualize the contributions of this work and highlight its unique aspects.\\n\\nFuture Directions: The paper could benefit from a discussion on potential future directions, such as the integration of additional omics data types, the development of new evaluation metrics, or the exploration of novel model architectures.\\n\\nPractical Applications: Including examples of practical applications or case studies where the benchmark has been used to derive meaningful biological insights would demonstrate the real-world utility of the resource.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"

"{\"title\": \"Thanks for your reply\", \"comment\": \"Thanks for your work, but I still intend to keep my scores. My question is more about including more diverse datasets and justifying the reasons for including them. As a researcher doing benchmarking analysis, you need to prove that your benchmark analysis is comprehensive, and you should have the same criteria for different tasks. For example, you have included more datasets for only two tasks; how about other tasks? Does it mean we cannot find enough datasets for certain tasks, or does it mean it is too early to perform a benchmarking analysis for this task given limited resources?\\n\\nBack to the two tasks you include with more datasets, I think there exists overlap between housekeeping enhancers and developmental enhancers, at least in fly (https://www.nature.com/articles/s41467-024-52921-2), and their working principles are different. How do you ensure that your selected datasets are independent, and if the same enhancers have different outputs across these two conditions, how should the results be interpreted? 
There are many things we need to consider other than reporting scores.\\n\\nFinally, I intend to emphasize that I am not a bad reviewer; I do not perform benchmarking analysis like you did (so there is no conflict), but I am interested in using a good DNA sequence model for exciting applications, and that is why I am very serious about evaluating a benchmarking manuscript. I think you have much more space to explore and improve your manuscript to finally make a great contribution in this field.\"}"

"{\"title\": \"Answer to Reviewer TFBg\", \"comment\": [\"We sincerely appreciate Reviewer TFBg's thoughtful feedback, and here we provide corresponding responses to address these concerns.\", \"### **Weakness1: Details About Foundation Models**\", \"Thank you for your suggestions. To better facilitate reader understanding, we have added more detailed information about the foundational biological models we used in the manuscript `APPENDIX A.6` in `blue`. The revisions primarily involve two aspects.\", \"We have provided more detailed information on the data and tasks involved in the pre-training process of different foundational models.\", \"We have conducted an analysis of the potential strengths and weaknesses of different models.\", \"All pre-trained biology foundation models utilize a Masked Language Modeling (MLM) objective.\", \"DNABERT2 utilized datasets from the human genome and multiple species genomes, totaling 35.24 billion nucleotide bases. The human genome dataset comprised 2.75 billion nucleotide bases, while the multi-species genome dataset included 32.49 billion nucleotide bases from the genomes of 135 different species. 
During the data processing, all sequences containing 'N' were removed, leaving only those composed of ATCG nucleotides.\", \"NTv2 leverages three datasets: the Human reference genome dataset, which contains 3.2 billion nucleotides; the 1000 Genomes Project (1000G) dataset, featuring over 20.5 trillion nucleotides; and the Multispecies dataset, comprising 174 billion nucleotides from 850 species. During the preprocessing phase, all nucleotides outside of ATCG are replaced with 'N'. For both the multispecies and human reference datasets, the genomes are segmented into overlapping chunks of 6,100 nucleotides. Each chunk overlaps with the previous one by sharing the first 50 nucleotides and with the next one by sharing the last 50 nucleotides.\", \"The RNA-FM model is pre-trained using data sourced from RNACentral. To ensure the non-redundancy of the dataset, RNA-FM employs CD-HIT (specifically, CD-HIT-EST) with a threshold set at 100\\\\% sequence identity. This process led to a final dataset comprising 23.7 million distinct RNA sequences.\", \"The BEACON-B model uses 523,934 human ncRNA sequences filtered from the total ncRNA in the RNACentral database as pre-training data.\", \"The ESM-1b model was pre-trained on UniRef50, which comprises approximately 30 million protein sequences. During the training process, sequences exceeding 1023 tokens (excluding the CLS token) are randomly truncated to a length of 1023 tokens.\", \"The ESM-2 model is trained using the UniRef50 dataset. To enhance the data volume and diversity, during each training update, a mini-batch of sequences from UniRef50 is sampled and replaced with sequences uniformly sampled from the corresponding UniRef90 clusters. This approach allows the ESM-2 model to be trained on over 60 million protein sequences.\", \"The CaLM model is pre-trained using cDNA data collected from the European Nucleotide Archive database. 
During preprocessing, sequences containing unknown nucleotides, start codons that are not ATG, internal stop codons, or a nucleotide count not divisible by three are removed. The final dataset consists of about 9 million cDNA sequences.\", \"We can see that all models are trained using MLM, among which the NTv2 model uses much more pre-training data than the other models, which may make it more generalizable across more tasks. The RNA-based models are all pre-trained only on ncRNA, and BEACON-B only uses a small part of the pre-training data, which may affect their potential performance. The protein-based models and the CaLM codon model use amino acid (non-overlapping 3-mer) encoding, which may give them an advantage in tasks that focus on amino acid expression. For the RNA-FM and ESM-1b models, the use of absolute position encoding limits the length of their input sequences, which may affect their performance on long-sequence tasks.\", \"### **Weakness2: Fine-tuning Strategy Selection Criteria**\", \"We only utilize LoRA fine-tuning for the large model (LucaOne, which has about 1B parameters) to obtain as fair a comparison with full fine-tuning as possible. It not only conserves computational resources but also ensures good performance of the model on downstream tasks.\", \"We employ full fine-tuning for all other models to fully leverage the potential of these models and ensure optimal results across various tasks.\"]}"

"{\"title\": \"≈\", \"comment\": \"Thank you for addressing this concern\"}"

"{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"We process data from 12 random 3' UTR libraries. 9 of the 12 libraries are used for training and 3 are held out (the 3 held-out libraries were excluded from the current analysis). To construct a balanced test set, sequences from each library are first shuffled independently according to their read counts. 
These shuffled sequences are then merged using a round-robin approach, selecting one sequence from each library at a time in descending order of read count. This strategy ensures that the test set contains an even representation of high-read count sequences across all libraries. The remaining sequences are appended to the beginning of the combined library, and the training set is further shuffled to enhance randomness. For benchmarking purposes, the top 10\\\\% of high-read count sequences are prioritized. Among these, the most abundantly expressed sequences are selected for testing, ensuring a high-quality, balanced dataset for training, validation, and evaluation.\\n\\n* Programmable RNA Switches\\n We adopt the data generation pipeline described in [r4]. A toehold-switch library comprising 244,000 potential trigger sequences is designed and synthesized, covering the complete genomes of 23 pathogenic viruses, the entire coding regions of 906 human transcription factors, and approximately 10,000 random sequences. Using this synthesized oligo pool, two construct libraries are created to represent the ON and OFF states, and both are transformed into BL21 E. coli. The OFF library includes toehold-switch constructs without triggers, while the ON library contains identical toeholds paired with complementary triggers fused to their respective switches.\\n\\n The libraries are sorted into four bins using fluorescence-activated cell sorting (FACS), and the variants in each bin are quantified through next-generation sequencing (NGS) to determine their fluorescence distributions. After quality control, the toehold-switch library consists of 109,067 ON-state measurements, 163,967 OFF-state measurements, and 91,534 ON/OFF paired ratios, where both states are characterized for each switch. ON and OFF data are normalized to a scale of 0 to 1, with ON/OFF ratios normalized to a range of -1 to 1. 
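For clarity, the round-robin merging strategy used to build the 3' UTR test set above can be sketched in a few lines of Python. This is illustrative only; the `(sequence_id, read_count)` pair representation is an assumption for the sketch, not the benchmark's actual data format.

```python
from itertools import zip_longest

def round_robin_merge(libraries):
    """Interleave sequences from several libraries, taking one sequence from
    each library per round, in descending order of read count within a library.

    `libraries` is a list of lists of (sequence_id, read_count) pairs
    (an assumed representation for illustration)."""
    # Order each library by read count, highest first.
    ordered = [sorted(lib, key=lambda item: item[1], reverse=True)
               for lib in libraries]
    merged = []
    for wave in zip_longest(*ordered):
        # One pick per library per round; exhausted libraries yield None.
        merged.extend(item for item in wave if item is not None)
    return merged
```

Shorter libraries simply drop out of later rounds, so the highest-read-count sequences from every library appear early in the merged order, which is what gives the test set an even representation across libraries.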
Following [r4], a stringent quality control process is applied to eliminate artifacts and ensure data reliability. The quality control (QC) framework includes five levels: QC1, QC2, QC3, QC4 and QC5, where QC1 represents the lowest quality and QC5 the highest. Datasets above QC2 are utilized for training, while QC5 is reserved for testing.\\n\\n* Secondary Structure Prediction\\n We follow the preprocessing steps outlined in the bpRNA-1m dataset [r5]. To reduce sequence redundancy and improve dataset diversity, we implement an 80\\\\% sequence-identity threshold and cap the maximum sequence length at 500 nucleotides, following protocols described in the referenced studies. These measures are essential for minimizing overfitting and ensuring that the models are trained on a wide range of genetically diverse samples. The dataset is divided into three subsets: a training set (TR0), a validation set (VL0), and a test set (TS0). The splitting process is randomized to eliminate potential biases and ensure an unbiased evaluation of the model's performance.\\n\\n* Protein Tasks\\n We obtain data on thermostability prediction, enzyme commission number prediction and contact map prediction from Saprot [r6]. Following the guidance on GitHub, we download the data and place it in the LMDB folder for supervised fine-tuning.\\n\\n* Cross-molecular Tasks\\n For the enzyme commission number prediction task, to obtain the codon information corresponding to protein sequences, we use the UniProtKB mapping function to convert UniProt IDs into European Nucleotide Archive entries. We then employ the Smith-Waterman algorithm to quickly match the corresponding codon sequences, filtering out all sequences that contained unknown nucleotides or where the number of matched nucleotides is not a multiple of three. For other cross-omics tasks, we adopt the data and settings from [r7].\\n\\n[r4] A deep learning approach to programmable RNA switches. 
Nature Communications 2020.\\n\\n[r5] bpRNA: large-scale automated annotation and analysis of RNA secondary structure. Nucleic Acids Research 2018.\\n\\n[r6] Saprot: Protein language modeling with structure-aware vocabulary. ICLR 2024.\\n\\n[r7] Are genomic language models all you need? exploring genomic language models on protein downstream tasks. Bioinformatics 2024.\"}"

"{\"title\": \"Thanks for your reply, but sorry they are not satisfying.\", \"comment\": \"Thanks for your reply, which addresses some of my concerns. However, my major concerns, including the lack of data diversity as well as baselines, are not fully addressed, and thus I intend to keep my score. It seems that the authors are not very familiar with sequence-to-function modelling or molecular biology, and I do not think it is reasonable to argue that you just follow others' ideas without justifying whether their choice is correct or not. This is extremely important for a successful benchmarking paper. Furthermore, Enformer is already widely used in the gene expression prediction task, and people even fine-tune it if needed (https://www.biorxiv.org/content/10.1101/2024.07.27.605449v1). I do not understand why the authors argue against this based on computational resources. We do need comprehensive data and baselines to make a fair conclusion.\"}"

"{\"title\": \"Thanks for your response, but I will keep my score\", \"comment\": \"Thanks for your response, but it does not directly answer my questions or address my concerns. I believe that a benchmarking paper should use various datasets to increase the sample size and compare model performance under different conditions to report an average effect. There is a high risk of a biased evaluation based on a single dataset. Since you mentioned that you performed a comprehensive benchmarking analysis, this point is even more important, which is the limit of a good benchmarking paper. 
I will raise the same questions for these benchmarking papers if they only include one dataset, and people will not trust it easily. Also, my question about information overlap is not well-addressed. I suggest the authors to investigate the biology background and discuss the shared gene elements in certain datasets, and explore if their effects are different or not.\"}", "{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"Table 4: Results of different models on homo-omics multi-molecules evaluated with one random seed.\\n\\n| Model/Task | EPI | Model/Task | AAN | Model/Task | siRNA |\\n|:----------------------------------------:|:-------:|:-----------------:|:-------:|:-----------------:|:---------------:|\\n| Metric | MCC (%) | Metric | MCC (%) | Metric | Mixed Score (%) |\\n| Naive Supervised Model | | | | | |\\n| CNN | 25.04 | CNN | 39.08 | CNN | 56.41 |\\n| ResNet | 56.76 | ResNet | 45.79 | ResNet | 61.74 |\\n| LSTM | 58.47 | LSTM | 39.73 | LSTM | 48.69 |\\n| Pretrained Omics Language Model (Frozen) | | | | | |\\n| DNABERT2+NTv2 | 11.67 | ESM1b+ESM2 | 44.31 | BEACON-B+RNA-FM | 49.86 |\\n| DNABERT2+DNABERT2 | 10.60 | ESM1b+ESM1b | 48.09 | BEACON-B+BEACON-B | 49.73 |\\n| NTv2+DNABERT2 | 6.47 | ESM2+ESM1b | 42.8 | RNA-FM+BEACON-B | 50.05 |\\n| NTv2+NTv2 | 13.06 | ESM2+ESM2 | 39.42 | RNA-FM+RNA-FM | 49.48 |\\n| LucaOne | 15.16 | LucaOne | 25.55 | LucaOne | 50.13 |\\n| | | | | | |\\n| Model/Task | RPI | Model/Task | CRI-Off | Model/Task | DPF |\\n| Metric | MCC (%) | Metric | SCC (%) | Metric | LDDT (%) |\\n| Naive Supervised Model | | | | | |\\n| CNN | 86.25 | CNN | 10.86 | CNN | 34.64 |\\n| ResNet | 87.39 | ResNet | 8.90 | ResNet | 33.14 |\\n| LSTM | 87.83 | LSTM | 7.63 | LSTM | 32.09 |\\n| Pretrained Omics Language Model (Frozen) | | | | | |\\n| ESM1b+RNA-FM | 83.96 | RNA-FM+NTv2 | 5.89 | NTv2+ESM-1b | 40.59 |\\n| ESM2+RNA-FM | 82.83 | RNA-FM+DNABERT2 | 3.87 | DNABERT2+ESM-1b | 39.90 |\\n| ESM1b+BEACON-B | 85.64 | BEACON-B+NTv2 | 4.7 | NTv2+ESM-2 | 42.53 
|\\n| ESM2+BEACON-B | 84.01 | BEACON-B+DNABERT2 | 3.14 | DNABERT2+ESM-2 | 43.39 |\\n| LucaOne | 77.90 | LucaOne | 8.42 | LucaOne | 32.65 |\"}", "{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"**More seeds against one seed experiments for multi-omics results**\", \"table_3\": \"Results of different models on homo-omics multi-molecules evaluated with three random seeds. Mean (std) is reported for each experiment.\\n\\n|Model/Task|EPI|Model/Task|AAN|Model/Task|siRNA|\\n|------------------------------------------|---------|-------------------|---------|-------------------|-----------------|\\n|Metric|MCC (%)|Metric|MCC (%)|Metric|Mixed Score (%)|\\n|Naive Supervised Model||||||\\n|CNN|27.19(1.88)|CNN|39.36(5.71)|CNN|57.35(1.10)|\\n|ResNet|56.26(0.66)|ResNet|38.79(6.79)|ResNet|52.67(13.07)|\\n|LSTM|58.78(2.80)|LSTM|38.39(1.55)|LSTM|47.85(1.50)|\\n|Pretrained Omics Language Model (Frozen)||||||\\n|DNABERT2+NTv2|13.17(1.30)|ESM1b+ESM2|43.79(1.39)|BEACON-B+RNA-FM|49.67(0.17)|\\n|DNABERT2+DNABERT2|13.86(3.90)|ESM1b+ESM1b|48.11(0.08)|BEACON-B+BEACON-B|49.60(0.39)|\\n|NTv2+DNABERT2|12.76(6.32)|ESM2+ESM1b|42.54(1.12)|RNA-FM+BEACON-B|49.76(0.26)|\\n|NTv2+NTv2|11.39(1.46)|ESM2+ESM2|38.42(0.95)|RNA-FM+RNA-FM|49.50(0.01)|\\n|LucaOne|15.84(0.61)|LucaOne|25.20(0.44)|LucaOne|50.04(0.29)|\\n|||||||\\n|Model/Task|RPI|Model/Task|CRI-Off|Model/Task|DPF|\\n|Metric|MCC (%)|Metric|SCC (%)|Metric|LDDT (%)|\\n|Naive Supervised Model||||||\\n|CNN|86.24(0.27)|CNN|10.19(1.06)|CNN|32.84(1.63)|\\n|ResNet|86.82(0.55)|ResNet|6.87(3.36)|ResNet|33.62(0.61)|\\n|LSTM|87.09(0.76)|LSTM|9.39(1.53)|LSTM|31.02(1.29)|\\n|Pretrained Omics Language Model 
(Frozen)||||||\\n|ESM1b+RNA-FM|84.16(0.18)|RNA-FM+NTv2|6.07(0.20)|NTv2+ESM-1b|39.15(1.29)|\\n|ESM2+RNA-FM|82.57(0.40)|RNA-FM+DNABERT2|1.87(1.96)|DNABERT2+ESM-1b|39.15(0.74)|\\n|ESM1b+BEACON-B|85.57(0.44)|BEACON-B+NTv2|3.11(1.72)|NTv2+ESM-2|42.47(0.93)|\\n|ESM2+BEACON-B|83.37(0.58)|BEACON-B+DNABERT2|3.18(1.76)|DNABERT2+ESM-2|42.69(0.71)|\\n|LucaOne|77.80(0.13)|LucaOne|9.17(0.69)|LucaOne|35.94(3.17)|\"}", "{\"title\": \"Consider our benchmark scope\", \"comment\": \"Thank you for your detailed response. We appreciate your concerns and would like to response more comprehensively.\\n\\n**Scope and Focus of Our Benchmark:** The primary goal of our work is to evaluate across different omics domains (DNA, RNA, and proteins). To mitigate bias, we have included multiple tasks for each omics domain. We kindly ask you to consider our contributions from this broader perspective of multi-omics benchmarking, rather than focusing solely on whether individual tasks have multiple datasets. This approach aligns with the design of other established biological benchmarks like BEACON (NeurIPS 2024), DART-Eval (NeurIPS 2024), BEND (ICLR 2024), PEER (NeurIPS), which also emphasize comprehensive scope rather than the inclusion of multiple datasets for every single task.\\n\\n**Tasks with Multiple Datasets:** The sequences in the two types of enhancer activity prediction datasets (housekeeping and developmental promoters) overlap intentionally, as the focus is on evaluating the activity differences of the same sequences under these two conditions. Furthermore, even if the enhancer activity task is not strictly considered as having multiple datasets for a single task, we have also supplemented the Alternative Polyadenylation (APA) task with additional datasets to enhance the study of isoform diversity, including HSPE1, SNHG6, and WHAMMP2. Our current benchmark already encompasses 17 tasks, representing a substantial effort. 
In future iterations, we will take your suggestions into account and further expand the number of datasets for each task to ensure a more comprehensive evaluation from a biological perspective.\"}", "{\"title\": \"Finetuning schemes and data nature/quality\", \"comment\": \"* Protein Tasks\\n We obtain data on thermostability prediction, enzyme commission number prediction and contact map prediction from Saprot [r6]. Following the guidance on github, we download data and place it in the LMDB folder for supervised fine-tuning.\\n\\n* Cross-molecular Tasks\\n For the enzyme commission number prediction task, to obtain the codon information corresponding to protein sequences, we use the UniProtKB mapping function to convert UniProt IDs into European Nucleotide Archive entries. We then employ the Smith-Waterman algorithm to quickly match the corresponding codon sequences, filtering out all sequences that contained unknown nucleotides or where the number of matched nucleotides is not a multiple of three. For other cross-omics tasks, we adopt the data and settings from [r7].\\n\\n* Enhancer-Promoter Interaction Prediction\\n We follow the processing of [r8]. We derive the dataset from EPIANN[r9], which includes six cell lines, GM12878, HeLa-S3, IMR90, K562, HUVEC and NHEK. To address the challenge of data imbalance, EPIANN enhanced the representation of positive samples by incorporating the upstream and downstream regions of enhancers. This approach expanded the dataset to include relevant genomic regions by defining extended windows of 3 kbp around enhancers and 2 kbp around promoters, ensuring a more comprehensive capture of the surrounding regulatory landscape.\\n\\n* siRNA Efficiency Prediction\\n We get the dataset from SAIS[r10]. 
We use the information of the reference sequence of the target gene, the sense sequence of the target gene, the sense sequence of modified siRNA and the remaining percentage of mRNA after the experiment named `gene_target_seq`, `siRNA_sense_seq`, `modified_siRNA_sense_seq`, and `mRNA_remaining_pct` in dataset from SAIS, respectively.\\n\\n* Antibody-Antigen Neutralizability Prediction\\n We follow [r11], which provides a minimal dataset specifically designed for this prediction task. This task is based on two datasets: CATNAP[r12], which focuses on HIV, and CoVAbDab[r13], which pertains to SARS-CoV-2.\\n HIV data is sourced from CATNAP in the Los Alamos HIV Database. Antibody (Ab) and antigen (Ag) sequences are extracted, curated to remove duplicates and missing values, and classified as neutralizing (IC\\u2085\\u2080 < 10 \\u03bcg/ml) or non-neutralizing (IC\\u2085\\u2080 \\u2265 10 \\u03bcg/ml). Seen and unseen Abs are split, ensuring no overlap between training, validation, and testing sets by excluding similar pairs (BlastP \\u2265 90%). Training is conducted on seen Abs, with unseen Abs used for evaluation across 20 random dataset splits.\\n SARS-CoV-2 Data is collected from CoVAbDab and includes pairwise Ab\\u2013Ag instances across variants like Alpha, Beta, Delta, and Omicron. Five sequences per variant and 11 for Omicron are used. Omicron is treated as an unseen Ag, excluded from training but incorporated in relation graphs for transductive learning, enabling the identification of broad-spectrum Abs.\\n\\n* CRISPR Off-Target Prediction Following [r14], we get the off-target dataset, which comprises two different cell types containing 30 sgRNAs. For all 30 sgRNAs, approximately 160,000 possible off-target sites across the entire genome are obtained. 
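As a minimal sketch of the CATNAP labeling rule described above (the function name and signature are illustrative assumptions; the 10 μg/ml cutoff is the threshold stated above):

```python
def label_neutralizing(ic50_ug_per_ml, threshold=10.0):
    """Binarize a CATNAP measurement as described above:
    neutralizing if IC50 < 10 ug/ml, non-neutralizing otherwise."""
    return ic50_ug_per_ml < threshold
```

Antibody–antigen pairs falling exactly on the threshold are labeled non-neutralizing, matching the IC50 >= 10 μg/ml condition above.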
Off-target sites are annotated and standardized using the targeting cutting frequency (indel frequency) detected by different off-target detection methods.\\n\\n[r6] Saprot: Protein language modeling with structure-aware vocabulary. ICLR 2024.\\n\\n[r7] Are genomic language models all you need? exploring genomic language models on protein downstream tasks. Bioinformatics 2024.\\n\\n[r8] Predicting enhancer-promoter interactions by deep learning and matching heuristic. Briefings in Bioinformatics 2021.\\n\\n[r9] Modeling enhancer-promoter interactions with attention-based neural networks. bioRxiv 2017.\\n\\n[r10] http://competition.sais.com.cn/competitionDetail/532230/format\\n\\n[r11] Predicting unseen antibodies' neutralizability via adaptive graph neural networks. Nature Machine Intelligence 2022.\\n\\n[r12] CATNAP: a tool to compile, analyze and tally neutralizing antibody panels. Nucleic Acids Research 2015.\\n\\n[r13] CoV-AbDab: the coronavirus antibody database. Bioinformatics 2021.\\n\\n[r14] DeepCRISPR: optimized CRISPR guide RNA design by deep learning. Genome Biology 2018.\"}"

"{\"title\": \"Answer to Reviewer TFBg\", \"comment\": \"### **Weakness3: Rationale Behind Selecting Metrics**\\n\\nTo ensure fairness and consistency, we keep the evaluation metrics consistent with those used by each original task, facilitating alignment with previously reported performance [r1,r2,r3]. Regarding metric selection, this paper encompasses 17 tasks in three categories: Single-omics, Cross-molecule, and Multi-molecules, with task objectives involving structure, function, and engineering design. The task types include single-label regression, multi-label regression, and multi-label classification, with test set sample sizes ranging from 40 to 49,755. 
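For reference, two of the metrics that recur across these tasks, MCC for classification and Spearman correlation for regression, can be computed as follows (a minimal pure-Python sketch; ties are ignored in the rank computation for brevity):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks
    (ties not averaged, for brevity)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        out = [0.0] * len(values)
        for rank, i in enumerate(order):
            out[i] = float(rank)
        return out
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

In practice one would use library implementations (which handle ties and degenerate cases); the sketch is only meant to pin down what the reported numbers measure.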
Like other bio-benchmark work, we therefore use different metrics for different tasks: the PEER benchmark (NeurIPS 2022) [r2] also has different metrics for regression tasks, such as SCC and RMSE; the BEACON benchmark (NeurIPS 2024) [r4] uses different metrics for regression tasks, such as SCC, R^2, and RMSD; and similarly, the GUE benchmark (ICLR 2024) [r5] uses different metrics for classification tasks, such as MCC and F1.\\n\\n\\n[r1] SAPROT: PROTEIN LANGUAGE MODELING WITH STRUCTURE-AWARE VOCABULARY. ICLR 2024.\\n\\n[r2] PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence Understanding. NeurIPS 2022.\\n\\n[r3] Are genomic language models all you need? exploring genomic language models on protein downstream tasks. Bioinformatics 2024.\\n\\n[r4] BEACON: Benchmark for Comprehensive RNA Tasks and Language Models. NeurIPS 2024.\\n\\n[r5] Dnabert-2: Efficient foundation model and benchmark for multi-species genome. ICLR 2024.\\n\\n### **Weakness4: Results and Findings Interpretation**\\n\\n\\nWe appreciate your valuable feedback. Our findings encompass a range of novel insights, which we categorize into two main types:\\n\\n1. **Novel discoveries** (these insights can inspire researchers to develop new algorithms and conduct experimental analyses):\\n\\n- Randomly initialized vocabulary embeddings reveal cross-omics knowledge learned during pre-training.\\n- Protein models enhance predictive performance in DNA and RNA regulatory tasks.\\n- DNA models demonstrate potential in protein and RNA tasks, reflecting DNA's foundational role in the central dogma.\\n- Single-omics models achieve competitive performance in their respective tasks, particularly structural tasks.\\n- Multi-omics models outperform single-omics models on multi-molecular tasks.\\n- Multi-molecular tasks remain significantly challenging and require further exploration.\\n2. 
**Findings aligned with existing work but expanded to broader contexts**:\\n\\n- CDS models demonstrate competitive performance on codon sequence data.\\n- Nucleotide models have the potential to rival CDS models.\\n(We extend findings similar to those in CaLM and \\\"Are Genomic Language Models All You Need?\\\" to more foundational models, including RNA models, making these discoveries more broadly applicable.)\\n\\n\\nIn response, we have reflected on your suggestions and revised the manuscript to enhance the depth and clarity of our findings. Key updates, highlighted in the manuscript `RESULTS` in `blue`, are as follows:\\n- Enhanced Results Structure: We have reorganized and refined the results section to present experimental outcomes more logically and concisely. This includes adjusting the sequence of conclusions and improving the flow of their presentation.\\n- Detailed Analysis: Additional explanations and context have been provided for the experimental results. For instance, we now discuss cross-omics adaptability, emphasizing how multi-omics models capture intricate molecular features by leveraging nucleotide, codon, and protein-specific representations. We also highlight the role of tailored pretraining on omics-specific data in achieving success.\\n- Deeper Insights: To make conclusions more insightful and actionable, we analyzed how nucleotide models implicitly learn codon patterns and adapt to cross-molecular tasks, demonstrating the potential of unified multi-omics representations. This insight underlines the need for architectural innovations and task-specific adaptations, particularly for tasks requiring highly specialized knowledge.\\n- Constructive Summaries: We added summary sections after the experimental analyses, offering clear and constructive takeaways. 
For example, the potential of multi-omics models to outperform task-specific models in capturing cross-omics dependencies is discussed, alongside the challenges faced by models like LucaOne in highly specialized tasks.\\n\\nThese revisions aim to enhance the manuscript's impact by providing deeper insights, actionable conclusions, and a clearer understanding of the results. Thank you for encouraging us to improve the quality of our work.\"}", "{\"title\": \"Answer to Reviewer 8ScB\", \"comment\": \"We sincerely thank the reviewer for the insightful feedback. Below are our responses to the points raised:\\n\\n### **Weakness1: Choices of Datasets**\\n\\nWe curated a comprehensive collection of tasks encompassing diverse molecular types and enlisted evaluations from several biology professors and PhD students. From these assessments, we selected a representative subset of tasks to establish the first multi-omics benchmark spanning DNA, RNA, and proteins.\\n\\n- As shown in Table 1 of our paper, the tasks were sourced from high-impact conferences, journals, and competitions, emphasizing literature with high citation counts. Many of these tasks have already been used to evaluate the performance of biological language models specific to their respective omics. For instance, Gene Expression data originates from Cell Reports, Enhancer Activity Prediction from Nature Genetics, and APA Isoform Prediction from Cell. Similarly, tasks such as Programmable RNA Switches and Secondary Structure Prediction derive from Nature Communications and Nucleic Acids Research, respectively, while others like Thermostability Prediction and Contact Map Prediction are sourced from NeurIPS and BMC Bioinformatics. 
The selected tasks span both single-omics and cross-omics domains and broadly encompass the aspects of structure, function, and engineering.\\n\\n- Notably, for DNA tasks, we included Gene Expression Prediction, which forecasts the cross-tissue expression levels of genes and transcription factors (TFs), shedding light on regulatory networks underlying cell states and tissue-specific genomic functions. Enhancer Activity Prediction, on the other hand, analyzes DNA sequences to predict enhancer activity on specific promoters, revealing how regulatory signals drive transcriptional specificity in different cell types. These tasks also vary significantly in sequence lengths\\u2014Gene Expression tasks use sequences of 6000 bp, while Enhancer Activity Prediction involves sequences of 249 bp for evaluation of model performance across varying DNA sequence lengths. As for other expression work, BEELINE [r1] and Li et al. [r2] focus on cell type identification and classification in single-cell RNA sequencing (scRNA-seq) datasets and provide curated lists of marker genes for specific cell types across tissues in humans and mice. However, these resources are mainly used in single-cell research with non-sequence data, such as scBERT [r3]. In our study, the gene expression task utilizes the Xpresso dataset from the Epigenomics Roadmap Consortium [r4], which is at the bulk level, emphasizes the regulatory effects of cell type-specific non-coding regions on gene expression, and is input into the model as sequence data.\\n\\n- Looking ahead, we aim to expand the benchmark by incorporating additional high-impact tasks that span multiple omics and broaden the scope of structural and functional predictions, driving innovation in bioinformatics and computational biology.\\n\\n[r1] Benchmarking algorithms for gene regulatory network inference from single-cell transcriptomic data. 
Nature Methods 2020.\\n\\n[r2] Benchmarking spatial and single-cell transcriptomics integration methods for transcript distribution prediction and cell type deconvolution. Nature Methods 2022.\\n\\n[r3] scBERT as a large-scale pretrained deep language model for cell type annotation of single-cell RNA-seq data. Nature Machine Intelligence 2022.\\n\\n[r4] Integrative analysis of 111 reference human epigenomes. Nature 2015.\\n\\n### **Weakness2: Metrics of Regression Tasks**\\n\\nWe maintain assessment metrics consistent with each original task, facilitating alignment with previously reported performance. Like other bio-benchmark work, the PEER benchmark (NeurIPS 2022) [r1] also has different metrics for regression tasks, such as SCC and RMSE; the BEACON benchmark (NeurIPS 2024) [r2] uses different metrics for regression tasks, such as SCC, R^2, and RMSD; and similarly, the GUE benchmark (ICLR 2024) [r3] uses different metrics for classification tasks, such as MCC and F1.\\n\\n[r1] Peer: a comprehensive and multi-task benchmark for protein sequence understanding. NeurIPS 2022.\\n\\n[r2] BEACON: Benchmark for Comprehensive RNA Tasks and Language Models. NeurIPS 2024.\\n\\n[r3] Dnabert-2: Efficient foundation model and benchmark for multi-species genome. ICLR 2024.\\n\\n\\n### **Weakness3: Version of DNABERT2**\\n\\nWe used the latest version of DNABERT2 for the experiments. 
To avoid an unfair comparison, since DNABERT2 uses extrapolatable positional encoding, we automatically adapted the input length to each downstream task in our experiments; for example, in the gene expression prediction task, the token length can be up to about 1500.\\nWhat we report in Table 2 is the maximum token length of the original DNABERT2 in the downstream task, which we have updated to the pre-trained token length.\"}", "{\"title\": \"Answer to Reviewer TFBg\", \"comment\": \"### **Weakness5: Comparing Literature SOTA to Pretrained FMs**\\n\\n- Similar results, where pre-trained models fall below the literature SOTA, can also be observed in the PEER benchmark [r4]. Literature SOTA models for specific downstream tasks can incorporate task-specific priors and post-processing. For example, among protein-related SOTA methods, MSA Transformer introduces multiple sequence alignment (MSA), integrating evolutionary information related to proteins [r1]. SaProt incorporates a spatial structure vocabulary, combining tertiary structure information to aid functional prediction [r2]. In contrast, language models focus on the generalizability of sequence representations and do not include such task-specific information; the two directions are orthogonal. For example, incorporating priors or additional features into language models can enhance task performance [r5]. However, this study primarily focuses on evaluating the representational capabilities of biological foundation models.\\n\\n- Furthermore, the literature SOTA model ESM-1v [r3], as a general protein language model, uses 98 million diverse protein sequences, significantly exceeding the data volume and diversity of ESM-1b. It achieves SOTA performance on the protein thermostability downstream task, demonstrating that increasing training data enhances the overall performance of language models and highlighting the significant potential of general biological language models. \\n\\n[r1] MSA Transformer. 
ICML 2021.\\n\\n[r2] SAPROT: PROTEIN LANGUAGE MODELING WITH STRUCTURE-AWARE VOCABULARY. ICLR 2024.\\n\\n[r3] Language models enable zero-shot prediction of the effects of mutations on protein function. NeurIPS 2021.\\n\\n[r4] Peer: a comprehensive and multi-task benchmark for protein sequence understanding. NeurIPS 2022.\\n\\n[r5] Predicting Antimicrobial Peptides Using ESMFold-Predicted Structures and ESM-2-Based Amino Acid Features with Graph Deep Learning. JCIM 2024.\\n\\n### **Weakness6: Performance on RNA Tasks**\\n\\nRegarding the issue that RNA foundation models have not achieved the top performance in any RNA tasks, we think the factors may originate from three aspects:\\n\\n- **Pre-training Data Scale:** RNA-FM is pre-trained on approximately 27 million sequences, while BEACON uses a much smaller subset of this data (only about 0.5 million sequences) to achieve efficient and cost-effective RNA models. In comparison, ESM-1b, which has a similar training scale, uses around 30 million protein sequences and performs worse than RNA foundation models on RNA downstream tasks. Models with better performance, such as ESM-2 and DNABERT2, are trained on nearly 60 million sequences. NTv2, which has superior comprehensive performance, is trained on an even larger dataset. This suggests that the scale of pre-training data can significantly impact performance on downstream tasks. \\n\\n- **Omics Type of Pre-training Data:** It is important to note that both RNA-FM and BEACON are trained using data from the RNACentral database, which primarily includes non-coding RNA. The exclusion of coding RNA might result in incomplete information learned by the models, potentially affecting their performance on downstream tasks. We observe that DNA models, while performing well on RNA property tasks, underperformed on RNA structure tasks. In contrast, protein models showed good generalization across various RNA tasks. 
We hypothesize that, based on the central dogma, DNA, RNA, and proteins share many similarities in their properties and functions, making knowledge transfer relatively easy. However, DNA sequences are less expressive of spatial structure information compared to proteins. Therefore, protein pre-trained models, which learn more structural information, generalize better to spatial structure tasks compared to DNA models.\"}", "{\"title\": \"Thank Reviewer 6UP5 so much\", \"comment\": \"We sincerely appreciate your recognition of our efforts to provide detailed explanations and additional experiments. Your constructive insights have been invaluable in improving the clarity and robustness of our work. We\\u2019re grateful for your support and encouragement!\"}", "{\"title\": \"Answer to Reviewer z8kJ\", \"comment\": \"### **Question 1: Discussion on Benchmark Scope**\\nOur principles for selecting tasks and datasets prioritize comprehensiveness and impactfulness. Comprehensiveness is ensured by including tasks and datasets across each omic type, their pairwise interactions, and biologically critical aspects in structure, function, and engineering. 
Impactfulness is achieved by selecting tasks and datasets from sources that are widely recognized as authoritative, peer-reviewed, and published in high-impact conferences or journals, as well as public datasets released in competitions.\\n\\nBased on these criteria, the limitations of our selection are as follows:\\n- We mainly focus on multi-omics sequence understanding tasks; other data types could be incorporated in the future.\\n- Tasks involving interactions among more than two omics are not included.\\n- Biologically critical tasks and datasets that are not yet widely acknowledged or published in high-impact venues may be excluded.\\n- Informative but private datasets, which we do not have access to, are not considered.\\n\\nWe believe these limitations reflect the trade-offs made to ensure the benchmark remains comprehensive, impactful, and grounded in publicly available, well-regarded resources.\\n\\n\\n\\n### **Question 2: Comparison with Existing Benchmarks**\\n\\nThere is limited work exploring multi-omics benchmarks. To the best of our knowledge, the closest related work is Boshar et al.\\u2019s [r1] analysis of gLMs applied to protein-related tasks. They established a benchmark for four state-of-the-art (SOTA) models over five common protein tasks, curating CDS sequences to enable fair comparisons between pLMs and gLMs. However, their focus remains largely on single-omics tasks, emphasizing codon-to-protein mappings without addressing the broader scope of multi-omics interactions.\\n\\nOur benchmark (COMET) includes 11 models (8 foundation models and 3 naive models) on a total of 17 tasks across multiple omics. COMET distinguishes itself through its comprehensive multi-omics focus, extending beyond codon-to-protein tasks to incorporate genomics, transcriptomics, and proteomics. 
This integration allows COMET to evaluate cross-omics interactions, such as antibody-antigen neutralization and RNA-protein interaction, addressing complex biological challenges that single-omics benchmarks cannot capture. COMET\\u2019s unique strength lies in its ability to assess cross-omics tasks and capture biological interdependencies across interconnected omics layers. By filling the gap in multi-omics benchmarking, COMET supports the development of integrative models capable of addressing the inherent complexity of biological systems. This positions COMET as a critical resource for advancing research in systems biology and multi-omics integration.\\n\\n\\n[r1] Are genomic language models all you need? exploring genomic language models on protein downstream tasks. Bioinformatics 2024.\\n### **Question 3: Future Directions**\\nWe appreciate the reviewer\\u2019s suggestion and agree that future directions are crucial for expanding the impact of this work. We propose three potential areas for further exploration, highlighted in the manuscript `APPENDIX A.8` in `orange`.\\n- Incorporating Additional Omics Data and Biological Priors:\\nExpanding the benchmark to include additional omics data types and features that indirectly influence biological processes in downstream tasks will enhance its comprehensiveness. Integrating biological priors, such as pathway-level annotations, protein-protein interactions, or chromatin accessibility maps, can provide a more holistic view of molecular interactions and improve the relevance of downstream predictions.\\n- Refining the Evaluation Pipeline for Multimodal Data:\\nDeveloping a more sophisticated evaluation pipeline that better infuses multimodal data will be essential. For instance, metrics that capture cross-modality consistency and assess how well models leverage complementary information from multiple omics types can provide deeper insights into model performance. 
Additionally, incorporating metrics that account for the stochasticity and uncertainty inherent in biological systems can improve evaluation robustness.\\n- Developing Novel Multi-Omics Foundation Models:\\nLeveraging the insights gained from this benchmark, we aim to explore novel model architectures tailored for multi-omics data. These models could employ advanced techniques like attention-based integration and hierarchical representations of omics modalities. The benchmarking insights will guide the design of these models, ensuring they address the specific challenges and opportunities identified in multi-omics tasks.\\n\\nThese future directions aim to expand the scope and utility of the benchmark, driving the development of innovative methods and fostering deeper biological insights through more accurate and comprehensive modeling of multi-omics data.\"}", "{\"comment\": \"Thanks for one additional experiment related to model size. I will keep my (already increased) score, considering for a benchmark paper on different models, more rigorous and comprehensive experiment with in-depth understanding / insights / variation with respect to the underlying model being evaluated is important.\"}", "{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"### **Question4: Explanation of The Term Frozen**\\n\\nBroadly, we froze the released weights of the model's backbone while training classification heads for all tasks across all models. For cross-omics tasks, such as fine-tuning protein language models (ESM2) on nucleotide sequence tasks (APA Isoform Prediction), both the word embedding layer and the classification heads were trained.\\nFollowing the experimental setups used in BEND, PEER and computer vision studies, we adopted a fine-tuning approach with the backbone frozen. 
This approach serves two purposes: \\n\\n- evaluating the quality of the representations learned during pretraining\\n- exploring whether fine-tuning on downstream tasks disrupts the knowledge acquired during pretraining.\\n\\nIn tasks like Contact Map Prediction and Thermostability Prediction, where DNABERT2 with a frozen backbone performed better than full-parameter fine-tuning, we attribute this to the latter potentially overwriting or forgetting knowledge gained during pretraining, suggesting that full-parameter fine-tuning may, under certain conditions, lead to catastrophic forgetting.\\n\\n\\n### **Question5: Detailed Preprocessing for Each Task**\\n\\nThank you for your advice! The detailed task creation process has been included in the manuscript `APPENDIX A.4` of the revised version in `green`.\\n\\n* Gene Expression\\n We adopt the data processing methodology from Xpresso[r1]. Human gene expression data comes from the Epigenomics Roadmap Consortium, which provides normalized RNA-seq values for protein-coding mRNAs across 56 tissues and cell lines.\\n \\n Due to the large number of parameters in biological language models and the memory limitations of A100 GPUs, our experiments show that trimming sequence lengths to 6000 bp ensures compatibility with all models for processing input sequences. By inputting consecutive 6000 bp nucleotide fragments from different positions in the processed sequences into the Xpresso model, we identify that the sequence indexed from position 7000 to 12999 (length 6000 bp) achieves optimal test performance. This segment contains the most information related to gene expression levels.\\n \\n For training, we use the 6000 bp nucleotide sequence indexed from position 7000 to 12999 as input and the expression data for 56 tissues as labels. The train, validation, and test dataset splits follow the methodology used in Xpresso.\\n\\n* Enhancer Activity Prediction\\n We follow the processing procedure described in [r2]. 
The data includes sequence information and transcriptional activity metrics for both Drosophila and humans, encompassing developmental and housekeeping transcriptional activity levels.\\n \\n We use downloaded sequences of 249 bp in length, along with `Dev_log2_enrichment_scaled` and `Hk_log2_enrichment_scaled`, which respectively represent developmental and housekeeping transcriptional activity information. The dataset is divided into training, validation, and test sets according to the method outlined in [r2].\\n \\n \\n* APA Isoform Prediction\\n The preparation for APA isoform analysis begins by filtering raw sequencing reads from all MPRAs [r3] to retain only high-quality, full-length RNA sequences. These reads are grouped based on the randomized regions located upstream of the proximal polyadenylation site (pPAS), forming a dictionary of sequence variants for each library. To expand this dictionary, sequencing is also performed on the plasmid library, capturing members that lack expression of a distal isoform. RNA reads are then matched to dictionary entries by identifying the upstream region with the shortest Hamming distance.\\n\\n Polyadenylation cleavage sites are determined for each mapped read by detecting the presence of a Poly-A tail. The cleavage positions are recorded as vectors associated with individual sequence variants, including a specific position for reads mapping to non-random distal sites. The dataset generated from this process consists of a dictionary of distinct sequence variants paired with vectors of cleavage position counts. A final filtering step ensures data quality by discarding sequences supported by fewer than 10\\u201320 unique UMI RNA reads or those containing over 75\\\\% A-nucleotides within a 12\\u201320 bp region, which could indicate internal priming artifacts.\\n\\n[r1] Predicting mRNA abundance directly from genomic sequence using deep convolutional neural networks. 
Cell Reports 2020.\\n\\n[r2] DeepSTARR predicts enhancer activity from DNA sequence and enables the de novo design of synthetic enhancers. Nature Genetics 2022.\\n\\n[r3] Integration of multiple epigenomic marks improves prediction of variant impact in saturation mutagenesis reporter assay. Human Mutation 2019.\"}", "{\"title\": \"Thank you for your feedback. To address your remaining concerns\", \"comment\": \"Thank you for your feedback. To address your remaining concerns, we provide the following responses:\\n\\n**1. Dataset Selection:**\\n\\nWe began by identifying key tasks in molecular biology involving DNA, RNA, and proteins and selected representative datasets for each task. Our task and dataset selection process is guided by community recognition, relying on peer-reviewed sources with real-world applications, rather than blindly following others' ideas without critically evaluating their validity.\\n\\n**2. Diversity of Datasets:**\\n\\nWe carefully considered dataset diversity by ensuring:\\n- Coverage across all omics: DNA, RNA, and protein, as well as multi-omics interactions.\\n- A variety of task types: including sequence-wise regression, sequence-wise classification, residue-wise regression, and residue-wise classification.\\n- A wide range of data sizes: from ~700 to 41K samples across different datasets.\\n- A broad and reasonable range of sequence lengths: from 23 to 6k.\\n- Datasets with excessively long sequences, such as the 200k-length CAGE profile prediction dataset used by Enformer, were excluded as they are unsuitable for evaluating RNA and protein language models. In the future, we plan to include multiple datasets for the same task type to further enhance diversity within individual tasks.\\n\\n**3. Baseline Method Selection:**\\n\\n- We selected baseline methods based on their demonstrated applicability and representativeness for specific tasks. 
For instance, in the gene expression prediction task, Xpresso serves as an appropriate baseline for sequence-wise regression tasks predicting bulk RNA-seq expression across 56 tissues and cell lines. In contrast, Enformer\\u2019s task involves predicting CAGE profiles through binned nucleotide-wise regression. Evaluating Enformer requires datasets that match its task, and using mismatched datasets would compromise fairness (please see `notes` below).\\n- This is analogous to computer vision benchmarks where segmentation tasks are divided into semantic segmentation and instance segmentation\\u2014methods designed for one task are not necessarily evaluated on the other, as their models and datasets are not aligned. We recognize Enformer\\u2019s contributions to gene expression prediction and acknowledge that its model could be adapted for this task by modifying its head; we plan to include it in future work. \\n\\n**4. Computational Resource Considerations:**\\n\\nOur mention of computational resources primarily aims to clarify why we selected Xpresso's sequence-wise regression bulk RNA-seq dataset for the gene expression task instead of Enformer\\u2019s binned nucleotide-wise regression CAGE profile dataset.\\n\\nWe would greatly appreciate your evaluation and feedback.\\n\\n`notes`:\\n- Model: Xpresso, Dataset: Bulk RNA-seq, Task: sequence-wise regression, Length: 6k\\n- Model: Enformer, Dataset: CAGE Profile, Task: binned nucleotide-wise regression, Length: 200k\"}
We deeply value your rigorous evaluation and commitment to ensuring the quality of benchmarking analyses.\\n\\n- **Included Dataset and Diversity:** Regarding the overlap between housekeeping and developmental enhancers, we intentionally included these datasets to study two distinct types of enhancer activity. While these datasets may overlap, the aim is to understand the differences in their functional outputs under varying conditions. To further enhance dataset diversity in our benchmark, we have included additional datasets covering multiple cell types. For instance, in the Enhancer-Promoter Interaction Prediction task, we incorporated interaction scenarios from six distinct cell type datasets: `GM12878`, `HUVEC`, `HeLa-S3`, `IMR90`, `K562`, and `NHEK`. Additionally, in this rebuttal, we have enriched our work with extra datasets for the task Alternative Polyadenylation, broadening the scope of isoform-level analysis with the `HSPE1`, `SNHG6`, and `WHAMMP2` datasets. Moreover, HSPE1 focuses on protein folding, SNHG6 addresses non-coding RNA functions, and WHAMMP2 targets protein interaction and structural dynamics, showcasing the broad functional spectrum.\\n\\n- **Benchmarking Precedents:** Based on precedents in the field, having multiple datasets for the same task is not a strict requirement for a benchmark. For instance, recently accepted benchmarks such as BEACON [r1] and DART-Eval [r2] do not include multiple datasets for every task. Similarly, other well-regarded benchmarks like BEND [r3], PEER [r4], and GUE [r5] have multiple datasets for only one or two tasks. Including multiple datasets for every task is often unnecessary and impractical, especially when carefully curated, representative datasets can already provide robust evaluations.\\n\\n- **Space for Future Improvements:** We wholeheartedly agree that benchmarking analyses can always be expanded and improved. 
While our current work lays a solid foundation by covering a wide range of tasks and datasets, we see this as an ongoing effort. Future iterations of our benchmark will incorporate more diverse datasets, refine task criteria, and address additional biological complexities to further strengthen its impact.\\n\\nWe hope this clarifies our approach and demonstrates the balance we have struck between dataset diversity and the practical constraints of benchmark design. We would also like to invite you to consider evaluating our contribution in advancing the study of biological language models across different omics (DNA, RNA, and Protein), as reviewer 6UP5 has highlighted. Thank you again for your engagement and constructive suggestions, which we believe will help us improve our work and its impact.\\n\\n[r1] BEACON: Benchmark for Comprehensive RNA Tasks and Language Models. NeurIPS 2024.\\n\\n[r2] DART-Eval: A Comprehensive DNA Language Model Evaluation Benchmark on Regulatory DNA. NeurIPS 2024.\\n\\n[r3] BEND: Benchmarking DNA Language Models on biologically meaningful tasks. ICLR 2024.\\n\\n[r4] PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence Understanding. NeurIPS 2022.\\n\\n[r5] Dnabert-2: Efficient foundation model and benchmark for multi-species genome. ICLR 2024.\"}", "{\"title\": \"Finetuning schemes and data nature/quality\", \"comment\": \"* APA Isoform Prediction\\n The preparation for APA isoform analysis begins by filtering raw sequencing reads from all MPRAs [r3] to retain only high-quality, full-length RNA sequences. These reads are grouped based on the randomized regions located upstream of the proximal polyadenylation site (pPAS), forming a dictionary of sequence variants for each library. To expand this dictionary, sequencing is also performed on the plasmid library, capturing members that lack expression of a distal isoform. 
RNA reads are then matched to dictionary entries by identifying the upstream region with the shortest Hamming distance.\\n\\n Polyadenylation cleavage sites are determined for each mapped read by detecting the presence of a Poly-A tail. The cleavage positions are recorded as vectors associated with individual sequence variants, including a specific position for reads mapping to non-random distal sites. The dataset generated from this process consists of a dictionary of distinct sequence variants paired with vectors of cleavage position counts. A final filtering step ensures data quality by discarding sequences supported by fewer than 10\\u201320 unique UMI RNA reads or those containing over 75\\\\% A-nucleotides within a 12\\u201320 bp region, which could indicate internal priming artifacts.\\n\\nWe process data from 12 random 3' UTR libraries. 9 among the 12 libraries are used for training and 3 held out (the 3 held-out libraries were excluded from the current analysis). To construct a balanced test set, sequences from each library are first shuffled independently according to their read counts. These shuffled sequences are then merged using a round-robin approach, selecting one sequence from each library at a time in descending order of read count. This strategy ensures that the test set contains an even representation of high-read count sequences across all libraries. The remaining sequences are appended to the beginning of the combined library, and the training set is further shuffled to enhance randomness. For benchmarking purposes, the top 10\\\\% of high-read count sequences are prioritized. Among these, the most abundantly expressed sequences are selected for testing, ensuring a high-quality, balanced dataset for training, validation, and evaluation.\\n\\n* Programmable RNA Switches\\n We adopt the data generation pipeline described in [r4]. 
A toehold-switch library comprising 244,000 potential trigger sequences is designed and synthesized, covering the complete genomes of 23 pathogenic viruses, the entire coding regions of 906 human transcription factors, and approximately 10,000 random sequences. Using this synthesized oligo pool, two construct libraries are created to represent the ON and OFF states, and both are transformed into BL21 E. coli. The OFF library includes toehold-switch constructs without triggers, while the ON library contains identical toeholds paired with complementary triggers fused to their respective switches.\\n\\n The libraries are sorted into four bins using fluorescence-activated cell sorting (FACS), and the variants in each bin are quantified through next-generation sequencing (NGS) to determine their fluorescence distributions. After quality control, the toehold-switch library consists of 109,067 ON-state measurements, 163,967 OFF-state measurements, and 91,534 ON/OFF paired ratios, where both states are characterized for each switch. ON and OFF data are normalized to a scale of 0 to 1, with ON/OFF ratios normalized to a range of -1 to 1. Following [r4], a stringent quality control process is applied to eliminate artifacts and ensure data reliability. The quality control (QC) framework includes five levels: QC1, QC2, QC3, QC4, and QC5, where QC1 represents the lowest quality and QC5 the highest. Datasets above QC2 are utilized for training, while QC5 is reserved for testing.\\n\\n* Secondary Structure Prediction\\n We follow the preprocessing steps outlined in the bpRNA-1m dataset [r5]. To reduce sequence redundancy and improve dataset diversity, we implement an 80\\\\% sequence-identity threshold and cap the maximum sequence length at 500 nucleotides, following protocols described in the referenced studies. 
These measures are essential for minimizing overfitting and ensuring that the models are trained on a wide range of genetically diverse samples.\\n\\n The dataset is divided into three subsets: a training set (TR0), a validation set (VL0), and a test set (TS0). The splitting process is randomized to eliminate potential biases and ensure an unbiased evaluation of the model\\u2019s performance.\\n\\n[r3] Integration of multiple epigenomic marks improves prediction of variant impact in saturation mutagenesis reporter assay. Human Mutation 2019.\\n\\n[r4] A deep learning approach to programmable RNA switches. Nature Communications 2020.\\n\\n[r5] bpRNA: large-scale automated annotation and analysis of RNA secondary structure. Nucleic Acids Research 2018.\"}", "{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"The experimental results demonstrate:\\n\\n- The fine-tuning of the frozen pretrained Omics Language Model exhibits a low standard deviation across multiple random seeds, and the mean of the multi-seed experiments closely matches the results of single-seed experiments.\\n- For the naive supervised models, the multi-seed mean results align closely with single-seed results across all experiments, except for the ResNet experiment.\\n\\nRegarding the diverse and complex tasks covered in the paper:\\n\\nOur work spans three categories\\u2014single-omics, cross-molecular, and multi-molecular\\u2014comprising a total of 17 tasks. These tasks address a broad range of objectives, including structure, function, and engineering, and encompass various task types, such as single-label regression, multi-label regression, and multi-label classification. Additionally, the size of the test sets varies significantly, ranging from 40 to 49,755 samples. 
Given this complexity, we adhered to the evaluation metrics recommended in the original dataset publications to assess model performance comprehensively.\"}", "{\"title\": \"Answer to Reviewer 8ScB\", \"comment\": [\"### **Weakness4: Explanation of Percentage**\", \"We followed the representation in BEACON and used percentage (%) to indicate that the value is scaled by a factor of one hundred. In Table 3, PCC, SCC, and R\\u00b2 values are multiplied by 100 and reported as percentages (%) for readability. Negative percentages indicate negative values of the original metric scaled by a factor of 100, reflecting the same interpretation as their unscaled counterparts.\", \"### **Weakness5: Justification for the Task-Specific Method Selection**\", \"We ensure that the task-specific methods chosen are the most relevant and effective. Although Enformer and Borzoi also target gene expression prediction, their task differs significantly from the gene expression task in our study due to its large computational cost and complexity.\", \"The input sequence lengths for these tasks vary substantially: Enformer operates on sequences of length 200k, whereas Xpresso\\u2019s gene expression tasks use sequences of length 6k. The 200k sequence length exceeds the processing capability of RNA and protein language models, making Enformer unsuitable for inclusion as a comparative method for gene expression prediction tasks in this study.\", \"Additionally, Enformer focuses on predicting expression levels across tracks and buckets based on CAGE data, which requires a highly complex head architecture following the language model. 
This approach demands significantly higher computational costs for full-parameter fine-tuning.\", \"### **Weakness6: Emphasis on Other Key Information**\", \"Thanks for the suggestion, we've updated some of the other key information below, highlighted in the manuscript `ABSTRACT` in `red`.\", \"We observed that DNA, RNA, and protein models can be applied to tasks across different omics by leveraging initialized embeddings, with protein models demonstrating superior performance across various omics.\", \"Through the evaluation of multi-omics tasks, we identified significant gaps in the capabilities of current models to address these challenges, highlighting substantial opportunities to enhance multi-omics integration and improve overall performance.\", \"### **Weakness7: Justifications for Prevention of Information Leakage**\", \"For the unsupervised pre-training, since most models are pre-trained on sequences without corresponding labels, we do not need to worry about data overlap between the pre-training and downstream tasks.\", \"For downstream tasks, we followed established, peer-reviewed data processing pipelines for tasks like secondary structure prediction and APA isoform prediction (see details in Answer to Reviewer 6UP5 `Question5: Detailed Preprocessing for Each Task`). Additionally, after preprocessing, we double-check to ensure there is no overlap between the training and testing datasets.\"]}", "{\"title\": \"Answer to Reviewer z8kJ\", \"comment\": \"Thank you for the insightful feedback. We appreciate the opportunity to clarify and refine our work based on your review.\\n\\n### **Weakness 1: Justifications for Methodological Novelty**\\n\\nWe appreciate the reviewer\\u2019s observation regarding methodological novelty and would like to clarify the intent and scope of our work. We believe that a benchmark can make substantial contributions even without proposing a new method. 
Similar to benchmark studies such as PEER [r1] and ProteinGym [r2], COMET offers a novel analytical perspective to explore the capabilities of existing biological language models\\u2014particularly their ability to tackle tasks beyond the boundaries of their respective omics domains. Our standardized datasets and benchmarks ensure fair evaluation of architectures, algorithms, and training strategies and reveal model strengths and weaknesses that are overlooked in their original studies. As far as we know, our work is the first to address the gap in multi-omics studies by curating datasets and evaluating tasks, providing a strong foundation for further research.\\n\\nThis paper is designed as an unbiased benchmark study to evaluate existing methods rather than using the benchmark to promote our own methods. Our goal is to establish a standardized benchmark that enables researchers to assess state-of-the-art methods objectively and as a foundation for future methodological development. Developing new methods will be the focus of future work and will be based on the insights gained from this study.\\n\\n[r1] PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence Understanding. NeurIPS 2022.\\n\\n[r2] ProteinGym: Large-Scale Benchmarks for Protein Design and Fitness Prediction. NeurIPS 2023.\\n### **Weakness 2: Explanation of Findings** \\nWe appreciate your valuable feedback. Our findings encompass a range of novel insights, which we categorize into two main types:\\n\\n1. 
**Novel discoveries** (these insights can inspire researchers to develop new algorithms and conduct experimental analyses):\\n\\n- Randomly initialized vocabulary embeddings reveal cross-omics knowledge learned during pre-training.\\n- Protein models enhance predictive performance in DNA and RNA regulatory tasks.\\n- DNA models demonstrate potential in protein and RNA tasks, reflecting DNA's foundational role in the central dogma.\\n- Single-omics models achieve competitive performance in their respective tasks, particularly structural tasks.\\n- Multi-omics models outperform single-omics models on multi-molecular tasks.\\n- Multi-molecular tasks remain significantly challenging and require further exploration.\\n2. **Findings aligned with existing work but expanded to broader contexts**:\\n\\n- CDS models demonstrate competitive performance on codon sequence data.\\n- Nucleotide models have the potential to rival CDS models.\\n(We extend findings similar to those in CaLM and \\\"Are Genomic Language Models All You Need?\\\" to more foundational models, including RNA models, making these discoveries more broadly applicable.)\\n\\n\\nIn response, we have reflected on your suggestions and revised the manuscript to enhance the depth and clarity of our findings. Key updates, highlighted in the manuscript `RESULTS` in `blue`, are as follows:\\n- Enhanced Results Structure: We have reorganized and refined the results section to present experimental outcomes more logically and concisely. This includes adjusting the sequence of conclusions and improving the flow of their presentation.\\n- Detailed Analysis: Additional explanations and context have been provided for the experimental results. For instance, we now discuss cross-omics adaptability, emphasizing how multi-omics models capture intricate molecular features by leveraging nucleotide, codon, and protein-specific representations. 
We also highlight the role of tailored pretraining on omics-specific data in achieving success.\\n- Deeper Insights: To make conclusions more insightful and actionable, we analyzed how nucleotide models implicitly learn codon patterns and adapt to cross-molecular tasks, demonstrating the potential of unified multi-omics representations. This insight underlines the need for architectural innovations and task-specific adaptations, particularly for tasks requiring highly specialized knowledge.\\n- Constructive Summaries: We added summary sections after the experimental analyses, offering clear and constructive takeaways. For example, the potential of multi-omics models to outperform task-specific models in capturing cross-omics dependencies is discussed, alongside the challenges faced by models like LucaOne in highly specialized tasks.\\n\\nThese revisions aim to enhance the manuscript's impact by providing deeper insights, actionable conclusions, and a clearer understanding of the results. Thank you for encouraging us to improve the quality of our work.\"}", "{\"title\": \"Answer to Reviewer z8kJ\", \"comment\": \"### **Question 4: Case Studies of Biology Benchmarks for Practical Applications**\\nBenchmarks in biology have demonstrated their utility in uncovering meaningful biological insights by systematically evaluating different deep learning models.\\n\\n- For example, in gene expression prediction, studies by Sasse et al. [r1] and Huang et al. [r2] exposed generalizability issues in models like Enformer, Basenji2, ExPecto, and Xpresso, which underperformed in vivo. Their analysis revealed that Enformer overly relied on specific SNVs for predicting expression levels. 
Similarly, Khan et al.\\u2019s [r3] independent evaluation of scBERT demonstrated that while it generalizes well to new datasets, its performance is highly sensitive to class imbalance.\\n- For multi-omics tasks, our benchmark can facilitate research in codon optimisation to boost protein yields [r4] and vaccine design [r5] in the real world. And it can also facilitate drug discovery, such as siRNA drugs [r6] for gene silencing and the design of more affinitive antibodies [r7].\\n\\nThese benchmarks have been instrumental in identifying model strengths and limitations, providing actionable insights that drive improvements in architecture and training strategies. However, the absence of standardized benchmarks in multi-omics research continues to hinder the advancement in this field. Aiming to address this gap, our multi-omics benchmark provides a crucial foundation that systematically assesses existing models, uncovering their potential and biological insights across DNA, RNA and proteins.\\n\\n\\n[r1] Benchmarking of deep neural networks for predicting personal gene expression from DNA sequence highlights shortcomings. Nature Genetics 2023. \\n\\n[r2] Personal transcriptome variation is poorly explained by current genomic deep learning models. Nature Genetics 2023. \\n\\n[r3] Reusability report: Learning the transcriptional grammar in single-cell RNA-sequencing data using transformers. Nature Machine Intelligence 2023.\\n\\n[r4] High-throughput 5\\u2032 UTR engineering for enhanced protein production in non-viral gene therapies. Nature Communications 2021. \\n\\n[r5] Algorithm for optimized mRNA design improves stability and immunogenicity. Nature 2023.\\n\\n[r6] On the art of identifying effective and specific siRNAs. Nature Methods 2006.\\n\\n[r7] AIntibody: an experimentally validated in silico antibody discovery design challenge. 
Nature Biotechnology 2024.\"}", "{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"* Enhancer-Promoter Interaction Prediction\\n We follow the processing of [r8]. We derive the dataset from EPIANN[r9], which includes six cell lines, GM12878, HeLa-S3, IMR90, K562, HUVEC and NHEK. To address the challenge of data imbalance, EPIANN enhanced the representation of positive samples by incorporating the upstream and downstream regions of enhancers. This approach expanded the dataset to include relevant genomic regions by defining extended windows of 3 kbp around enhancers and 2 kbp around promoters, ensuring a more comprehensive capture of the surrounding regulatory landscape.\\n\\n* siRNA Efficiency Prediction\\n We get the dataset from SAIS[r10]. We use the information of the reference sequence of the target gene, the sense sequence of the target gene, the sense sequence of modified siRNA and the remaining percentage of mRNA after the experiment named `gene_target_seq`, `siRNA_sense_seq`, `modified_siRNA_sense_seq`, and `mRNA_remaining_pct` in dataset from SAIS, respectively.\\n\\n* Antibody-Antigen Neutralizability Prediction\\n We follow [r11], which provides a minimal dataset specifically designed for this prediction task. This task is based on two datasets: CATNAP[r12], which focuses on HIV, and CoVAbDab[r13], which pertains to SARS-CoV-2.\\n HIV data is sourced from CATNAP in the Los Alamos HIV Database. Antibody (Ab) and antigen (Ag) sequences are extracted, curated to remove duplicates and missing values, and classified as neutralizing (IC\\u2085\\u2080 < 10 \\u03bcg/ml) or non-neutralizing (IC\\u2085\\u2080 \\u2265 10 \\u03bcg/ml). Seen and unseen Abs are split, ensuring no overlap between training, validation, and testing sets by excluding similar pairs (BlastP \\u2265 90%). 
Training is conducted on seen Abs, with unseen Abs used for evaluation across 20 random dataset splits.\\n SARS-CoV-2 Data is collected from CoVAbDab and includes pairwise Ab\\u2013Ag instances across variants like Alpha, Beta, Delta, and Omicron. Five sequences per variant and 11 for Omicron are used. Omicron is treated as an unseen Ag, excluded from training but incorporated in relation graphs for transductive learning, enabling the identification of broad-spectrum Abs.\\n\\n* RNA-Protein Interaction Prediction\\n The dataset is sourced from NPInter2.0[r17], NPInter2.0\\\\_lncRNA[r18], and RPI7317[r19]. The sequences of ncRNAs and proteins are obtained from the NONCODE database[r20], Gencode database[r22], and UniProt database[r21]. The NPInter database integrates new datasets from literature and related resources, with a major focus on data published in recent years. Through a systematic PubMed search using keywords related to RNA interactions, 1270 relevant articles were identified. Verified or processed interaction data were manually extracted, while raw sequencing data were excluded. Binding sites were compared against RefSeq coding genes to remove overlaps with coding regions and cross-checked with NONCODE for ncRNA references. Valid interactions were annotated with standardized IDs (UniProt, RefSeq, NONCODE, etc.) depending on the molecule type.\\n Data from external resources like LncRNADisease[r23], which curated 478 experimentally supported lncRNA interactions, were integrated and subjected to the same annotation pipeline. The combined dataset underwent redundancy elimination, aggregating overlapping interactions into single records. NPInter v2.0 thus provides a comprehensive, curated multilevel snapshot of RNA-related interactions.\\n\\n\\n\\n[r8] Predicting enhancer-promoter interactions by deep learning and matching heuristic. Briefings in Bioinformatics 2021.\\n\\n[r9] Modeling enhancer-promoter interactions with attention-based neural networks. 
bioRxiv 2017.\\n\\n[r10] http://competition.sais.com.cn/competitionDetail/532230/format\\n\\n[r11] Predicting unseen antibodies' neutralizability via adaptive graph neural networks. Nature Machine Intelligence 2022.\\n\\n[r12] CATNAP: a tool to compile, analyze and tally neutralizing antibody panels. Nucleic Acids Research 2015.\\n\\n[r13] CoV-AbDab: the coronavirus antibody database. Bioinformatics 2021.\\n\\n[r17] NPInter v2. 0: an updated database of ncRNA interactions. Nucleic acids research 2014.\\n\\n[r18] The bipartite network projection-recommended algorithm for predicting long non-coding RNA-protein interactions. Molecular Therapy-Nucleic Acids 2018.\\n\\n[r19] LPI-BLS: Predicting lncRNA--protein interactions with a broad learning system-based stacked ensemble classifier. Neurocomputing 2019.\\n\\n[r20] NONCODE v3. 0: integrative annotation of long noncoding RNAs. Nucleic acids research 2012.\\n\\n[r21] Update on activities at the Universal Protein Resource (UniProt) in 2013. Nucleic acids research 2012.\\n\\n[r22] GENCODE reference annotation for the human and mouse genomes. Nucleic acids research 2019.\\n\\n[r23] LncRNADisease: a database for long-non-coding RNA-associated diseases. Nucleic acids research 2012.\"}", "{\"comment\": \"Great work thanks for the detailed explanations and re-runs I have decided to increase the score. I think that the multiple runs presented and the more thorough explanations of the models and tasks make this work more valuable.\"}", "{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"* CRISPR Off-Target Prediction\\n Following [r14], we get the off-target dataset, which comprises two different cell types containing 30 sgRNAs. For all 30 sgRNAs, approximately 160,000 possible off-target sites across the entire genome are obtained. 
Off-target sites are annotated and standardized using the targeting cutting frequency (indel frequency) detected by different off-target detection methods.\\n\\n* DNA-Protein Folding Prediction\\n We query the PDB database using the filenames provided by deepPBD[r15] to obtain the mmCIF files of DNA-protein complexes and get 428 mmCIF files. From the mmCIF files, we extract the coordinates, sequences, and certain bonding information of both DNA and proteins. When encountering modified residues or nucleotides in the mmCIF files, we follow the AlphaFold3[r16] and map these residues or nucleotides to standard amino acids or DNA sequences using SCOP. We set the DNA-protein interface distance threshold to 5\\u00c5. Based on this threshold, we derive the DNA-protein interface information. Subsequently, we match the DNA and protein duplex information using the DNA-protein interface and sequence information. Finally, we obtained 683 DNA-protein complexes.\\n\\n[r14] DeepCRISPR: optimized CRISPR guide RNA design by deep learning. Genome Biology 2018.\\n\\n[r15] Geometric deep learning of protein--DNA binding specificity. Nature Methods 2024.\\n\\n[r16] Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature 2024.\"}", "{\"title\": \"Could you kindly share what your remaining concerns are?\", \"comment\": \"Could you kindly share what your remaining concerns are? We would be happy to address them further to ensure our work meets your expectations.\"}", "{\"title\": \"Finetuning schemes and data nature/quality\", \"comment\": \"* DNA-Protein Folding Prediction We query the PDB database using the filenames provided by deepPBD[r15] to obtain the mmCIF files of DNA-protein complexes and get 428 mmCIF files. From the mmCIF files, we extract the coordinates, sequences, and certain bonding information of both DNA and proteins. 
When encountering modified residues or nucleotides in the mmCIF files, we follow the AlphaFold3[r16] and map these residues or nucleotides to standard amino acids or DNA sequences using SCOP. We set the DNA-protein interface distance threshold to 5\\u00c5. Based on this threshold, we derive the DNA-protein interface information. Subsequently, we match the DNA and protein duplex information using the DNA-protein interface and sequence information. Finally, we obtained 683 DNA-protein complexes.\\n\\n* RNA-Protein Interaction Prediction\\n The dataset is sourced from NPInter2.0[r17], NPInter2.0\\\\_lncRNA[r18], and RPI7317[r19]. The sequences of ncRNAs and proteins are obtained from the NONCODE database[r20], Gencode database[r22], and UniProt database[r21]. The NPInter database integrates new datasets from literature and related resources, with a major focus on data published in recent years. Through a systematic PubMed search using keywords related to RNA interactions, 1270 relevant articles were identified. Verified or processed interaction data were manually extracted, while raw sequencing data were excluded. Binding sites were compared against RefSeq coding genes to remove overlaps with coding regions and cross-checked with NONCODE for ncRNA references. Valid interactions were annotated with standardized IDs (UniProt, RefSeq, NONCODE, etc.) depending on the molecule type.\\n Data from external resources like LncRNADisease[r23], which curated 478 experimentally supported lncRNA interactions, were integrated and subjected to the same annotation pipeline. The combined dataset underwent redundancy elimination, aggregating overlapping interactions into single records. NPInter v2.0 thus provides a comprehensive, curated multilevel snapshot of RNA-related interactions.\\n\\n\\n[r15] Geometric deep learning of protein--DNA binding specificity. Nature Methods 2024.\\n\\n[r16] Accurate structure prediction of biomolecular interactions with AlphaFold 3. 
Nature 2024.\\n\\n[r17] NPInter v2. 0: an updated database of ncRNA interactions. Nucleic acids research 2014.\\n\\n[r18] The bipartite network projection-recommended algorithm for predicting long non-coding RNA-protein interactions. Molecular Therapy-Nucleic Acids 2018.\\n\\n[r19] LPI-BLS: Predicting lncRNA--protein interactions with a broad learning system-based stacked ensemble classifier. Neurocomputing 2019.\\n\\n[r20] NONCODE v3. 0: integrative annotation of long noncoding RNAs. Nucleic acids research 2012.\\n\\n[r21] Update on activities at the Universal Protein Resource (UniProt) in 2013. Nucleic acids research 2012.\\n\\n[r22] GENCODE reference annotation for the human and mouse genomes. Nucleic acids research 2019.\\n\\n[r23] LncRNADisease: a database for long-non-coding RNA-associated diseases. Nucleic acids research 2012.\"}", "{\"summary\": \"This paper presents the comprehensive multi-omics benchmark COMET (Benchmark for Biological Comprehensive Multi-omics Evaluation Tasks and Language Models), created to assess models across single-omics, cross-omics, and multi-omics tasks. The goal of this benchmark is to identify key challenges in multi-omics research and to guide future efforts, ultimately fostering advancements in understanding biological processes through the integrated analysis of diverse omics data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper curated a collection of datasets and tasks covering structural and functional aspects in DNA, RNA, and proteins, including tasks that span multiple omics levels. This paper further evaluated a variety of FMs for respective Bio-modalities, offering insights into their performance, especially with respect to cross-modality applications.\", \"weaknesses\": \"The introduction to the various benchmark tasks is thorough. 
However, it would be advantageous to provide more detailed information about the AIML models being evaluated, particularly regarding their potential strengths and weaknesses for specific tasks. Additionally, it is recommended to explain other aspects of model training and evaluation, such as the criteria for choosing between LoRA fine-tuning and full fine-tuning for each model, and the rationale behind selecting specific metrics for evaluating each model.\\n\\nResults interpretation is rather brief and sometimes confusing. \\n\\n\\u2022\\tWhat is the key takeaway from all the experiments when comparing Literature SOTA to Pre-trained FMs? Specifically, Literature SOTA outperformed in all protein-related tasks listed in Table 3\\u2014what insights can be drawn from this? Additionally, no RNA-based FMs achieved top performance in any RNA tasks\\u2014what insights can be provided here? Could it be related to how much data was used in pre-training the RNA-based FMs? The summary text in section 5.2 does not accurately reflect the data presented in Table 3, which is causing confusion.\\n\\n\\u2022\\tFor 5.3, CROSS-MOLECULAR BENCHMARK RESULTS, why the performance on EC is so much worse for CaLM after refinement, which is typically not the case if refinement is properly done? Also for EC task, using condon sequence gets noticeably worse results compared to its protein sequence counterpart. What is special about EC task compared to other tasks like Beta-Lac and Flu, which might contribute to this difference? \\n\\n\\u2022\\tFor 5.4 MULTI-MOLECULAR BENCHMARK RESULTS, In contrast to single-molecular tasks, for multi-molecular tasks, Literature SOTA still dominantly work better compared to multi-omics models or combination of two single-omics models. 
It would be important to elaborate the possible limitations in the current implementation of the multi-omics models that leads to this contrast and inferior performance.\\n\\nThe related work of paper can be strengthened and some of the claims can be formulated in the appropriate context of existing works. For example, the paper claims to be the first to establish a benchmark for \\\"compiling tasks and data involving cross-molecules and multi-molecules\\\" and \\\"evaluate existing foundation models for DNA, RNA, and proteins, as well as multiomics approaches. We conduct experiments with fully-finetuned or frozen models.\\\" Recent works such as Prakash, Moskalev et al., 2024 (Bridging biomolecular modalities for knowledge transfer in bio-language models) and Boshar et al., 2024 (Are Genomic Language Models All You Need?) should be cited as laying the foundations for these ideas and uncovering the link to central dogma which the paper claims as their contribution. \\n\\nSome of the conclusions have already been derived in prior works such as \\\"CDS model demonstrates competitive performance on codon sequence data\\\" has been reported already in CaLM (Outerial & Deane, 2024)\", \"questions\": \"Provided in Weakness section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to Reviewer 6UP5\", \"comment\": \"### **Question2: Criteria For Selecting Tasks**\\n\\nWe curated a comprehensive collection of tasks encompassing diverse molecular types and enlisted evaluations from several biology professors and PhD students. From these assessments, we selected a representative subset of tasks to establish the first multi-omics benchmark spanning DNA, RNA, and proteins.\\n\\n- As shown in Table 1 of our paper, the tasks were sourced from high-impact conferences, journals, and competitions, emphasizing literature with high citation counts. 
Many of these tasks have already been used to evaluate the performance of biological language models specific to their respective omics. For instance, Gene Expression data originates from Cell Reports, Enhancer Activity Prediction from Nature Genetics, and APA Isoform Prediction from Cell. Similarly, tasks such as Programmable RNA Switches and Secondary Structure Prediction derive from Nature Communications and Nucleic Acids Research, respectively, while others like Thermostability Prediction and Contact Map Prediction are sourced from NeurIPS and BMC Bioinformatics. The selected tasks span both single-omics and cross-omics domains and broadly encompass the aspects of structure, function, and engineering.\\n\\n- Notably, for DNA tasks, we included Gene Expression Prediction, which forecasts the cross-tissue expression levels of genes and transcription factors (TFs), shedding light on regulatory networks underlying cell states and tissue-specific genomic functions. Enhancer Activity Prediction, on the other hand, analyzes DNA sequences to predict enhancer activity on specific promoters, revealing how regulatory signals drive transcriptional specificity in different cell types. These tasks also vary significantly in sequence lengths\\u2014Gene Expression tasks use sequences of 6000 bp, while Enhancer Activity Prediction involves sequences of 249 bp for evaluation of model performance across varying DNA sequence lengths. For the other expression dataset, PanglaoDB focuses on cell type identification and classification in single-cell RNA sequencing (scRNA-seq) datasets and provides curated lists of marker genes for specific cell types across tissues in humans and mice. However, it is mainly used in single-cell research with non-sequence data, for instance, scBERT [r1]. 
In our study, the gene expression task utilizes the Xpresso dataset from the Epigenomics Roadmap Consortium [r2], which is at the bulk level and emphasizes the regulatory effects of cell type-specific non-coding regions on gene expression, as well as inputs into the model as the sequence data.\\n\\n- Looking ahead, we aim to expand the benchmark by incorporating additional high-impact tasks that span multiple omics and broaden the scope of structural and functional predictions, driving innovation in bioinformatics and computational biology.\\n\\n[r1] scBERT as a large-scale pretrained deep language model for cell type annotation of single-cell RNA-seq data. Nature Machine Intelligence 2022.\\n\\n[r2] Integrative analysis of 111 reference human epigenomes. Nature 2015.\"}", "{\"title\": \"Thank you for your response. We want to address your remaining concerns\", \"comment\": \"Thank you for your response. We deeply appreciate your feedback, and your insights will be valuable in helping us improve our work. We are genuinely committed to addressing your concerns and engaging in meaningful discussions to resolve them.\"}", "{\"title\": \"Thanks for you reply.\", \"comment\": \"Thanks for you reply. I keep my scores as I do not see more datasets for one specific task as a requirement of data diversity for a benchmarking paper. You can focus on addressing others' opinions to increase the probability of raising the score.\"}", "{\"comment\": \"Thanks for your response which clarifies some of my concerns. I will keep my score.\"}", "{\"title\": \"Answer to Reviewer TFBg\", \"comment\": \"### **Weakness10: More Appropriate Context of Existing Works**\\n\\nThank you for your feedback and suggestions. 
We have improved the context and introduction of the relevant work in the manuscript `RELATED WORK` in `blue`.\\n\\nRecent studies, such as \\\"Are Genomic Language Models All You Need?\\\" [r1] and \\\"Bridging Biomolecular Modalities for Knowledge Transfer in Bio-Language Models\\\" [r2], have explored the idea of transferring pre-trained biological models to other omics domains. Researching multi-omics is a cutting-edge and popular topic in this field, aiming to integrate and utilize information from various types of biological data to gain deeper insights into complex biological systems.\\n\\n- We have already included \\\"Are Genomic Language Models All You Need?\\\" in the related works section of our initial manuscript and greatly value its contributions.\\n- Although our work (submission for ICLR deadline on October 1, 2024) predates the work \\\"Bridging Biomolecular Modalities for Knowledge Transfer in Bio-Language Models\\\" (bioRxiv submission on October 17, 2024), we will follow your kind suggestion and incorporate this work into the revised version of our manuscript.\\n \\nTo make our claim more clear, we have reformulated our contribution as: \\\"We present the first benchmark that encompasses single-omics tasks, cross-omics tasks, and multi-omics tasks spanning DNA, RNA, and protein sequences.\\\" By establishing this benchmark, we hope to provide a standardized platform for comparing and improving the capabilities of multi-omics models, thereby promoting more comprehensive and accurate biological research.\\n\\n[r1] Are genomic language models all you need? exploring genomic language models on protein downstream tasks. Bioinformatics 2024.\\n\\n[r2] Bridging biomolecular modalities for knowledge transfer in bio-language models. bioRxiv 2024.\\n\\n### **Weakness11: Similar Findings with CaLM**\\n\\n- CaLM is an outstanding contribution to the field, and we recognize and highly commend its work. 
In the process of establishing a benchmark for multi-omics frameworks, we arrived at the same conclusion as CaLM: \\\"CDS model demonstrates competitive performance on codon sequence data.\\\" \\n- The study \\\"Are genomic language models all you need? Exploring genomic language models on protein downstream tasks\\\" [r1] reached similar conclusions after CaLM and was published in Bioinformatics. Moreover, we further explored the potential of nucleotide models, including RNA models, to excel in protein-related tasks, making this finding more universally applicable. This highlights how our benchmark not only reinforces such discoveries but also drives impactful advancements within the community\\u2014just one of the many contributions our benchmark offers.\\n\\n[r1] Are genomic language models all you need? exploring genomic language models on protein downstream tasks. Bioinformatics 2024.\"}", "{\"comment\": \"Author's rebuttal and editing of the manuscript partially addressed my questions, in experiment setup, validation, and interpretation. To draw more insightful conclusion, a more rigorous experiments (e.g. FM models with different/comparable sizes, consistent finetuning schemes, data nature/quality of evaluated tasks and their influence etc) are desired, but may not be feasible during the limited rebuttal timeframe. I have increased my score to 5.\"}", "{\"comment\": \"Thank you for addressing this concern. This is hard work indeed. I would suggest that due to a large number of numbers, you can focus the reader by marking the values where the sd is of the same scale as the mean, such as the RNA-FM or mark the best performance (if applicable, given the sd). Again great work.\"}" ] }
C7ffKahGty
A Kernel Perspective on Training-Free Few-Shot Adaptation of Large Vision-Language Models
[ "Yassir Bendou", "Amine Ouasfi", "Vincent Gripon", "Adnane Boukhayma" ]
The growing popularity of Contrastive Language-Image Pretraining (CLIP) has led to its widespread application in various visual downstream tasks. To enhance CLIP's effectiveness, efficient few-shot adaptation techniques have been widely adopted. Among these approaches, training-free methods, particularly caching methods exemplified by Tip-Adapter, have gained attention for their lightweight adaptation without the need for additional fine-tuning. In this paper, we revisit Tip-Adapter from a kernel perspective, showing that caching methods function as local adapters and are connected to a well-established kernel literature. Leveraging this insight, we offer a theoretical understanding of how these methods operate and suggest multiple avenues for enhancing over the Tip-Adapter baseline. Notably, our analysis shows the importance of incorporating global information in local adapters. Therefore, we subsequently propose a global method that learns a proximal regularizer in a reproducing kernel Hilbert space (RKHS) using CLIP as a base learner. Our method, that we call ProKeR (Proximal Kernel ridge Regression), has a closed form solution and achieves state-of-the-art performance across 11 datasets in the standard few-shot adaptation benchmark.
[ "Few-shot Learning", "Vision-Language", "CLIP", "Efficient adaptation" ]
https://openreview.net/pdf?id=C7ffKahGty
https://openreview.net/forum?id=C7ffKahGty
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yE57VezEql", "eyMdrCZz5c", "Y0TwJJiQcH", "MvlRVRr9Ui", "4uqCh0pmve" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730356900961, 1729913276789, 1730762048729, 1730384141970, 1731657500644 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11906/Reviewer_hUi6" ], [ "ICLR.cc/2025/Conference/Submission11906/Reviewer_o6Ez" ], [ "ICLR.cc/2025/Conference/Submission11906/Reviewer_6W9D" ], [ "ICLR.cc/2025/Conference/Submission11906/Reviewer_7dUy" ], [ "ICLR.cc/2025/Conference/Submission11906/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper explores training-free methods like Tip-Adapter from a kernel perspective, providing theoretical analysis and algorithmic innovations. By categorizing caching methods as Local adapters, the paper proposes a new global adapter that learns a proximal regularizer in a reproducing kernel Hilbert space. Experiments are conducted on standard benchmarks.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper provides a theoretical explanation from a kernel perspective for methods like Tip-Adapter, validating the effectiveness of existing approaches.\\n2. Based on this theoretical framework, a new algorithmic design that incorporates global information is proposed.\\n3. There are improved results on standard benchmarks reported, illustrating the practical efficacy of the proposed method.\", \"weaknesses\": \"1. The citation format in the paper does not align with the ICLR template; the paper uses numerical references, whereas ICLR prefers author-name citations (e.g., CLIP by Radford et al., 2021).\\n2. The relationship between equations (2) and (3) is unclear. Equation (3) includes a denominator that is absent in equation (2); how should these be corresponded and understood?\\n3. The concept of \\\"global\\\" as opposed to \\\"local\\\" is confusing in this context. 
Familiar Tip-Adapter methods use weighted few-shot samples to predict test sample outcomes, utilizing all training samples, which seems global in nature. What explicit meanings are assigned to \\\"global\\\" and \\\"local\\\" in this paper?\\n4. The focus of the paper on training-free CLIP adaptation could benefit from a discussion of recent related works such as [1].\\n\\n[1]Dual memory networks: A versatile adaptation approach for vision-language models, CVPR2024\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a kernel-based method to understand and enhance the training-free adaptation methods (Tip-adapter). The authors provide a novel analysis of the Tip-Adapter method from a kernel perspective and identify that Tip-Adapter functions as a modified Nadaraya-Watson estimator. Then, the authors propose ProKeR and RKHS proximal regularizers. Extensive experiments are conducted and the results validate the effectiveness of the proposed ideas.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The overall presentation is clear and easy to follow.\", \"This paper analyses Tip-Adapter from the kernel-based perspective and provides a novel theoretical foundation, which provides some insights for understanding training-free adapting methods.\", \"Based on the understanding, the authors further propose ProKeR, which leverages the RKHS framework to introduce global regularization, addressing biases present in local nonparametric methods like Tip-Adapter.\", \"ProKeR achieves the state-of-the-art for training-free few-shot adaptation under two settings.\"], \"weaknesses\": [\"Reproducibility: Authors should report the values of the hyperparameters in the implementation part. 
Moreover, in CoOp's setting, the ImageNet dataset has no validation set; how do the authors select hyperparameters?\", \"What are validation shots? Are they the validation set itself or shots drawn from the validation set? If it is the validation set, it is better to name it as such to avoid confusion.\", \"The implementation part misses many details. For example, did the authors follow CoOp to run each experiment 3 times and report the average results? How did the authors use data augmentation? Since the authors did not provide code, they should clarify these points for reproducibility.\", \"Table 5 misses the results of GDA.\", \"I understand that this paper is mainly for training-free settings. However, I am concerned about the practical usage. If we are provided with K-shot samples, why would we only use training-free methods without fine-tuning, as fine-tuning usually leads to much better results (as shown in Tip-Adapter)? Could the authors further discuss how to develop ProKeR with fine-tuning (like Tip-Adapter-F), or could we combine ProKeR with fine-tuning methods to further boost their performance?\", \"ProKeR seems sensitive to different kernels. Could the authors further discuss why specific kernels yield better results, for better understanding?\", \"An important goal of fine-tuning is to maintain the generalization ability of CLIP. Could the authors report the base-to-new setting results as done in GDA to demonstrate the generalization ability of ProKeR?\", \"It seems that ProKeR only surpasses GDA by a small margin on CoOp's datasets, but with nearly 3 times the inference time. Could the authors further discuss the advantages of ProKeR over GDA?\", \"Hyperparameter $\\\\lambda$ is an important factor of the proposed method, but the sensitivity analysis is placed in the appendix. It is suggested to move this part into the main paper. 
Moreover, it is strange that authors use $\\\\lambda$ instead of true values for sensitivity analysis, could authors specify the values of $\\\\lambda$ for analysis for better understanding?\"], \"questions\": \"See weakness.\\nI am happy to revise my score if authors could address my concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose ProKeR, a training-free few-shot adaptation method for vision-language models like CLIP that leverages a kernel perspective. Building on Tip-Adapter, they frame the caching-based adaptation as a kernel regression problem, introducing a proximal regularization in a reproducing kernel Hilbert space to capture both local and global features. ProKeR integrates global regularization with few-shot data, incorporating global information in local adapters. Extensive experiments on both fine-grained and OOD datasets demonstrate that ProKeR consistently outperforms existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"ProKeR\\u2019s kernel-based approach is theoretically grounded and provides a fresh perspective for improving Tip-Adapter\\u2019s framework.\", \"ProKeR effectively enhances local adapters with global information, striking a balance between local adaptability and global regularization to prevent overfitting.\", \"The method is both memory- and computation-efficient, leveraging Random Fourier Features and a closed-form solution in kernel ridge regression to reduce resource demands.\"], \"weaknesses\": [\"ProKeR is only tested on training-free few-shot adaptation, which restricts its scope. Exploring the effects of loosening or reinforcing this constraint could enhance the method's practical relevance. For instance, would additional computational resources improve ProKeR\\u2019s performance, for example by training? 
Alternatively, if no few-shot samples were available, could it still operate effectively? Additionally, recent methods like DMN[1], which utilize an attention-based cache for both zero-shot and few-shot adaptation in training and training-free contexts, are absent from ProKeR\\u2019s comparison set.\", \"ProKeR aims to enhance few-shot adaptation by aligning few-shot and zero-shot features in RKHS. However, the paper does not adequately assess the effectiveness of this alignment. A deeper analysis of feature alignment, such as using metrics like cosine similarity between few-shot and zero-shot features or visualizations via t-SNE, could bolster the credibility of the method and provide clearer insights into how effectively ProKeR bridges this feature gap.\", \"ProKeR is evaluated for few-shot adaptation only in the experiments. It is not explicitly discussed if the method could handle zero-shot adaptation instead.\", \"[1] Zhang, Yabin, et al. \\\"Dual memory networks: A versatile adaptation approach for vision-language models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2024.\"], \"questions\": [\"Additional comparisons with more recent few-shot adaptation methods, e.g. DMN, is necessary.\", \"Discussion and/or experiments on zero-shot adaptation is important to fully explore the boundary of the proposed method.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on the few shot adaptation of large scale vision-language models (VLMs), particularly for CLIP-based cache model.\\nThe paper first rethinks the Tip-Adapter from the perspective of kernel, and then proposes a training-free method called ProKeR (Proximal Kernel ridge Regression) based on such kernel perspective. 
\\nExtensive experiments reveal that ProKeR achieves a new SOTA on few-shot classification benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.The paper provides a novel theoretical framework for better understanding the caching models, coming with detailed derivation.\\n\\n2.The proposed ProKeR achieves competitive performance on most few-shot classification datasets.\", \"weaknesses\": \"1.Rigorousness of theory:\\n\\n\\u2460Some claims have no theoretical basis, e.g., in line 201 & 277-279, the author claims that the regularization preserves prior knowledge, and predictions are not far from the zero-shot predictor, which is better for few-shot classification. \\nBut why predictions close to zero-shot predictor will be better, can you provide some theoretical evidence? \\n\\nRelatively, there are some contrasting methods, like AMU-Tuning, which does not perform regularization or require to be close to zero-shot predictor, but it gets better results.\\nMoreover, after introducing features from extra model, AMU-Tuning\\u2019s prediction will be further away from zero-shot predictor, but it achieves much better performance (about 5% higher than author\\u2019s on ImageNet before fine-tuning).\\nThe important thing is that a single model\\u2019s prediction have low performance, but when their prediction are combined, it will achieve a high performance.\\n\\nDoes the conclusion of AMU-Tuning indicate that CLIP's zero-shot prediction is sub-optimal? This seems contradict to author's claims and motivation.\\nCan author compare their methods, and explain why AMU-Tuning\\u2019s prediction is far from CLIP's zero-shot predictor, but achieves extremely better results?\\nIs this reveals CLIP's zero-shot prediction is sub-optimal? \\nOr is it because your method does not fully capture novel knowledge from support set? 
\\nIs this related to overfitting on the few-shot data?\\nIf CLIP's zero-shot prediction is indeed sub-optimal, is it reasonable for author to believe that the prediction should be close to CLIP's zero-shot prediction?\\n\\n\\u2461There are many estimates and approximations in paper, but without providing any analysis of errors. However, in extreme low-data situations, such errors may be fatal.\\nFor example, from equation 6 to equation 7, you have quantified the continuous mapping into a discrete form. But in fact, for few-shot problem, the value of K is very small. \\nIt is suggested to calculate the quantization errors and analyze the impact of K on such errors.\\n\\n\\n2.The contribution and practical value.\\nThe paper integrates large number of components like RBF, LLR, local method, global metric, RKHS, RFF, Polynomial, etc., making the mechanism very complex and cumbersome, but the advantages compared to SOTAs are not obvious.\\nIn summary, it has complex mechanism and non-competitive resource consumption, but does not achieve much better performance, i.e., on CoOp\\u2019s benchmark, only a bit higher than GDA.\\nA comprehensive component ablation should be conducted for readers to clearly identify the main components and their effect.\\n\\n\\n3.Weakness on writing. \\nThe paper lists a series of components and proprietary terms, some of which are secondary, but the author only describes them one by one without highlighting the core components.\\nAnd some components, like RKHS, have no introduction or contextual support, making it difficult for readers to understand their principles and why author need to use them.\\nThe mentioned problems make the key points of paper not clear and prevent readers to understand it. 
\\nIt is more like a document introducing the process rather than explaining an interesting method.\\nI think the authors should reorganize the paper with a high-level overview, a clear pipeline illustration and a road-map of their method early in the paper. \\nAnd they can consider including the descriptions of the unimportant components into the appendix to highlight their core framework.\", \"reference\": \"AMU-Tuning: Effective Logit Bias for CLIP-based Few-shot Learning\", \"questions\": \"1.I think the paper has shortcoming in theory or motivation, which needs to be researched and revised.\\n\\n2.The insight is inspiring, but the author should simplify their methods and improve the writing to make the paper looks more appealing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
C7XoUdJ5ZC
FLAIR: FEDERATED LEARNING WITH AUGMENTED AND IMPROVED FEATURE REPRESENTATIONS
[ "Sujit Chowdhury", "Aritra Bhaduri", "Raju Halder" ]
Federated Learning (FL) enables collaborative model training across decentralized clients while preserving data privacy. However, its performance declines in challenging heterogeneous data settings. To mitigate this, existing FL frameworks not only share locally trained parameters but also exchange additional information -- such as control variates, client features, and classifier characteristics -- to address the effects of class imbalance and missing classes. However, this leads to increased communication costs and heightened risks of privacy breaches. To strike a balance between communication efficiency, privacy protection, and adaptability to heterogeneous data distributions, we propose FLAIR, a novel FL approach with augmented and improved feature representations. FLAIR utilizes Class Variational Autoencoders (CVAE) for feature augmentation, mitigating class imbalance and missing class issues. It also incorporates Reptile meta-training to facilitate knowledge transfer between model updates, adapting to dynamic feature shifts. To generalize model update, FLAIR shares only local CVAE parameters instead of local model parameters, which reduces both communication costs and privacy risks. Our experiments on benchmark datasets -- such as MNIST, CIFAR-10, CIFAR-100, and TinyImageNet -- demonstrates a significant enhancement in model convergence and accuracy compared to state-of-the-art solutions, while reducing communication overhead and privacy risks.
[ "Federated Learning", "Class Variational Autoencoders", "Feature Augmentation", "Data Heterogeneity", "Privacy Preservation", "Communication Efficiency" ]
https://openreview.net/pdf?id=C7XoUdJ5ZC
https://openreview.net/forum?id=C7XoUdJ5ZC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vBNONm3a26", "sUYHDCkUmg", "k4XnPRvYJH", "b2CctI2L6q", "PiMy8RZtaF" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730678279439, 1731294429957, 1729861090701, 1732566847681, 1730646103937 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10958/Reviewer_xtHD" ], [ "ICLR.cc/2025/Conference/Submission10958/Reviewer_Dw88" ], [ "ICLR.cc/2025/Conference/Submission10958/Reviewer_aHcL" ], [ "ICLR.cc/2025/Conference/Submission10958/Authors" ], [ "ICLR.cc/2025/Conference/Submission10958/Reviewer_UNqK" ] ], "structured_content_str": [ "{\"summary\": \"\\\"FLAIR: Federated Learning with Augmented and Improved Feature Representations\\\" introduces the use of Class Variational Autoencoders (CVAE) to tackle challenges in federated learning, specifically addressing non-IID data distributions and communication overhead. The authors innovate by leveraging autoencoders to reduce communication costs, pointing out that traditional gradient sharing increases overhead. This new method is evaluated against different datasets and baseline methods as a solution to train models under non-iid setting. 
The authors also present theoretical guarantees of convergence, generalizability and robustness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"**Addressing Non-IID Data and Communication Constraints:** The authors address training models with non-IID data distributions while facing communication overhead limitations by proposing the use of Class Variational Autoencoders (CVAE).\", \"**Identification of Communication Overhead in Federated Learning:** The paper points out that gradient sharing in standard Federated Learning algorithms leads to increased communication overhead and how FLAIR attempts to address this.\", \"**Multiple Experiments:** Although the explicability of these experiments is a main concern which I explain in the weakness section, the study does evaluate the proposed approach against five baseline methods across three different datasets.\", \"The authors provide the code, facilitating reproducibility of their experiments and results.\"], \"weaknesses\": [\"**Claim 1 is not substantiated:**\", \"The **central** claim of the paper is that communication complexity is reduced, as indicated in Table 1. We know that FLAIR achieves O(E) communication complexity, but it is not evident how this is lower than O(St) or other methods ( ie: no clear evidence that E << St ). Additionally, assuming E << St , while the choice to exchange CVAE parameters instead of model gradients appears to be a valid solution for reducing per-round complexity, especially since the baselines considered all use model gradients, this does raise the question: Are the authors the first to propose this alternative in distributed learning? If so, this should be prominently highlighted. If not, why were other baselines that do not exchange model gradients not considered?\", \"**Claim 2 is not substantiated:**\", \"FLAIR is claimed to reduce privacy risks. In my opinion, this claim is unsubstantiated. 
The authors use the term \\\"privacy\\\" very loosely and mention terms like \\\"privacy attacks\\\" (Line 171) without clear definitions. Even setting aside the lack of rigor in terminology, I require more convincing evidence regarding the types of attacks considered. Gradient leaks are not applicable since there is no gradient sharing, but **how does the algorithm protect against Membership Inference Attacks (MIA)?** This would imply that an adversarial server cannot distinguish between two datasets with a missing sample based on the exchanged CVAE parameters, similar to standard Differential Privacy (DP) settings. Maybe a different definition of MIA is considered. Regardless, no details are provided on how exactly the privacy metrics are evaluated in this context.\", \"**Multiple Unsubstantiated Claims eg:**\", \"**Line 429:** \\\"Performance gap widening as dataset complexity and heterogeneity increases.\\\" *What evidence supports this statement?*\", \"**Line 445:** \\\"... noticeable jumps.\\\" *Where can I find the evidence for this claim?*\", \"**Experimental Issues:**\", \"**Beta Values:** These are not clearly explained. There appears to be some relation in the label skew section, but it is unclear how this causes non-overlapping classes across different clients.\", \"**Feature Skew Condition:** This is not well-explained in my opinion, and there seem to be missing citations for these definitions, which exist in the literature but not cited by the authors.\", \"**Increasing Noise Levels Experiment:** It is unclear whether this refers to DP-SGD. 
The paper does not explain this aspect adequately to me.\", \"**Ablation Studies:** There is no ablation on the choice of hyperparameters used in the combined loss function.\", \"**Missing References/Grammatical Errors:**\", \"**Line 169:** Wording error\", \"**Appendix**: What are proofs 4, 5, and 6, and how do they relate to the main theorems?\", \"**Theoretical Concerns:**\", \"**Theorem 1:** Although the proof relies on strong convexity, my primary concern is how convergence guarantees **are provided despite client drift**. I did not replicate the proof, but it is surprising that unbounded client drift would still allow the model to converge. If this is indeed the case, the authors must highlight it.\", \"**Theorem 2:** It is unexpected that the generalization bound has no constraints on the choice of hyperparameters in the combined loss function.\", \"**Theorems 3 and 4:** These rely on unspecified regularity conditions. The authors mention \\\"under suitable conditions\\\" but do not provide these conditions.\", \"**Models and Datasets:**\", \"The study focuses on CNNs and vision tasks, which is not a major concern. However, a brief explanation of why only vision tasks were chosen would be beneficial.\"], \"questions\": \"Refer to the Weaknesses Section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors of this work propose an FL framework that exploits conditional variational auto-encoders (CVAE) to tackle the data heterogeneity among clients participating in the training. They use a collaboratively learned CVAE to mitigate the heterogeneity that might arise from missing classes or extreme data distribution shifts. 
The authors provide theoretical analysis and experiments with their proposed solution\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The experiment section of the work shows considerable improvements that FLAIR can achieve over prior work, especially in more extreme non-IID scenarios. The authors mix two key methods in machine learning to develop their solution (i) Reptile meta-learning, and (ii) CVAE.\", \"weaknesses\": \"The main weaknesses of the work include the following:\\n1. How does the work on CVAE compare against other prior works that explore VAE in FL? \\n2. Is it a fair evaluation of this work to say that instead of encoding and decoding the raw input and label the authors instead use an encoded version (which is compressed in terms of dimensionality)? Given this difference in input dimensionality, could the authors comment on the following:\\n\\t- are the number of model parameters between all the compared models similar? For example, in the case of FLAIR, the number of training parameters includes those from CVAE, the feature extractor, and the classifier.\\n\\t- What is the effect of the Reptile meta-learning algorithm on CVAE training? Have the authors seen any significant drop in performance if they do not use it? Also, from algorithm 1, it is unclear which steps represent the change based on the meta-learning. \\n\\t- What is the computational complexity of FLAIR? How do the baselines compare to FLAIR regarding FLOPS vs accuracy or wall-clock time vs accuracy?\\n3. Figure 1 is very hard to read. Authors should consider adding a legend to explain the chronology of steps, and the meaning of different arrows (dashed, solid, colored, etc.)\\n4. In terms of organization, authors could possibly present CVAE, Reptile meta-learning, etc. as preliminaries before jumping into the main training in 2.1 and 2.2. 
Also, the theorems section does not present any equations making it hard to understand the theoretical contribution of this work.\\n5. The role of equation 3 is not particularly clear in the text. The fact that ${\\\\mathcal{L}}_{vf}$ and $\\\\tilde{\\\\mathcal{L}}_{vf}$ have opposing signs makes the loss similar to a min-max optimization. Is that the case? If so, the authors should clarify this point and present more details about the training objective. E.g., \\n\\t- what is the impact of removing the inter-class and intra-class consistency losses? \\n\\t- How do the authors choose these $\\\\lambda$ hyperparameters? How sensitive is the training to these?\", \"questions\": \"1. It is hard to visualize the heterogeneity based on the $\\\\beta$ values that the authors report. Could the authors instead present some metrics about how the labels are distributed across clients in the appendix instead?\\n2. Could the authors highlight details of the training, such as the total number of clients and the number of clients sampled per training round? Similarly, does the number of local epochs denote those of the client model (feature encoder and classifier) or CVAE?\\n\\t- Also what is the effect of changing these? Does FLAIR scale well over different settings of client configurations such as low client participation?\\n3. Why do the representations from different CVAEs on different clients not clash with each other? Specifically, what do the authors think makes the representation space not overlap for different classes on different clients, say the space occupied by class 1 on client 1 is the same as class 2 on client 2?\\n4. In Algorithm 2:\\n\\t- during the initial phase, what representations do the CVAE train on? At this stage, the model parameters for feature extraction should not be well-trained.\\n5. Do the authors repeat their experiments over multiple trials? If so, do the numbers in the table represent the means of the trials? 
It would be helpful to look at the mean and standard deviations to understand the stability of the method\\n6. In Table 2 and Table 6, why do the authors consistently highlight only their performance numbers even if the baselines outperform them? It makes it quite hard to read the tables\\n7. Authors mention that \\\"FLAIR exhibits faster convergence rates\\\". Could they point to the section that shows these experiments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a method, FLAIR, to address statistical heterogeneity among client datasets in the global federated learning task, specifically targeting communication overhead and privacy concerns associated with methods that exchange additional information. The authors tackle this problem by using an alternative approach to knowledge sharing through a CVAE model integrated into both server and client. To incorporate the CVAE into the local model training process, they modify the loss function to train the local model with three distinct components. Additionally, they introduce a Reptile meta-learning-based procedure to train the CVAE model. Extensive experiments are conducted across various scenarios to validate the effectiveness of the proposed method. Overall, the authors present a promising model to address the limitations of additional information-sharing methods, and the experimental results suggest its effectiveness. However, a more detailed analysis of the CVAE's role in performance improvements is necessary.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The authors propose an alternative approach to transmitting local model parameters directly, aiming to enhance both communication efficiency and privacy while improving overall performance.\\n\\n2. 
In addition to experiments measuring accuracy, the authors also conduct experiments to quantify the level of privacy, providing numerical evidence of the proposed algorithm's privacy guarantees.\", \"weaknesses\": \"1. The Conditional Variational Autoencoder (CVAE) applied in the paper was introduced in 2015, and since then, various other VAE techniques have been proposed. Therefore, alternative VAE techniques could potentially serve as mechanisms for sharing information between the server and clients. However, the paper does not clearly explain why CVAE, in particular, was chosen for the federated learning framework.\\n\\n2. This lack of clarity is also reflected in the experimental results. The experiments do not include an ablation study that would clarify the specific role of CVAE in the proposed method. For instance, an ablation could involve adding each of the three individual losses in the local model's loss function one by one or replacing CVAE with a vanilla VAE to observe the impact on performance.\\n\\n3. Furthermore, it is unclear which part of the algorithm is dedicated to the \\u201cReptile-based\\u201d approach. The algorithm seems to resemble a standard CVAE training procedure, so clarification is needed on the difference between the vanilla update and the Reptile-based update in this context.\", \"questions\": \"1. Is it \\\"Class VAE\\\" or \\\"Conditional VAE\\\"? The abbreviation \\u201cCVAE\\u201d is mentioned in both the abstract and introduction but appears to refer to different terms.\\n\\n2. In Table 1, what is the difference between the total number of clients and the number of local models? If each client has a single local model, shouldn't the FLAIR method also be computed as $O(E \\times S_t)$?\\n\\n3. What is the computational overhead of the additional update step for CVAE after local model training?\\n\\n4. 
In Section 2.2, what is the purpose of generating and including $\\tilde{y}$ in training? Unlike other losses, the reason for including this loss is not explained explicitly.\\n\\n5. In Section 2.5 on line 307, where are lines 34 and 35?\\n\\n6. In Section 4.2, line 356, the phrase \\u201cthree widely-used datasets\\u201d should be revised to \\u201cfour widely-used datasets\\u201d since the experiments are conducted on MNIST, CIFAR-10, CIFAR-100, and TinyImagenet.\\n\\n7. For reproducibility, it would be helpful to specify the hardware environment used in the experiments, including details like OS, CPU, and GPU.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes a novel FL method called FLAIR, which aims to strike a balance between communication efficiency, privacy protection, and adaptability to heterogeneous data distributions. Specifically, FLAIR utilizes Class Variational Autoencoders (CVAE) for feature augmentation, mitigating class imbalance and missing class issues. It also incorporates Reptile meta-training to facilitate knowledge transfer between model updates, adapting to dynamic feature shifts. To generalize model update, FLAIR shares only local CVAE parameters instead of local model parameters, which reduces both communication costs and privacy risks. Empirical experiments with extensive analysis on image classification datasets demonstrate the superiority of FLAIR in terms of test accuracy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. 
Data-free black-box knowledge transfer across heterogeneous clients in Federated Learning (FL) is interesting and promising.\", \"weaknesses\": \"1. There are some vague and confusing expressions in the paper, which seriously reduce its readability.\\n\\n2. This study lacks innovation, as previous studies have used similar but simpler strategies with a wider range of applicability, such as FL for model heterogeneity.\\n\\n3. There is a lack of ablation studies on the multiple hyperparameters in the proposed method.\", \"questions\": \"1. The depiction of FLAIR in Figure 1 is complex and has low readability; the contribution of this work cannot be effectively grasped from Figure 1 alone. I suggest further optimizing Figure 1 to make it more readable.\\n\\n2. There are many errors in the details, such as the fact that the parameter $\\\\theta$ in Eq. (2) does not indicate whose parameter it is, and is $\\\\theta_{k, t}$ (see Eq. (2)) the same as $\\\\theta_{t, k}$ (see Eq. (3))? I strongly recommend that the authors double-check the details of the wording in the paper.\\n\\n3. Existing work [1] seems to use a similar strategy (using a variational autoencoder). However, this paper lacks attention to and comparison with it. I think it should be an important comparison method.\\n\\n4. I did not see the setting of the number of clients during the experiment, that is, the value of $K$.\\n\\n5. All the reported results in the paper are final evaluation indicators, such as accuracy, which is insufficient. Therefore, the learning curves and communication rounds should also be reported to demonstrate the training process of FLAIR and the baselines.\\n\\n6. From the method description section, it can be inferred that multiple hyperparameters such as $\\\\lambda_f$, $\\\\lambda_\\\\widetilde{f}$, $\\\\lambda_c$ and $\\\\lambda$ are introduced during the training process of FLAIR. However, the ablation experiments lack detailed numerical studies on them.\\n\\n7. 
I don't understand the significance of the content reported in Table 3. Since FLAIR cannot obtain privacy estimates under MIA, Gradient Leakage, and Info Theoretic attacks, why report them?\\n\\n[1] Heinbaugh C E, Luz-Ricca E, Shao H. Data-free one-shot federated learning under very high statistical heterogeneity[C]//The Eleventh International Conference on Learning Representations. 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
}
C6hUK6Q1Pi
OmniParser for Pure Vision Based GUI Agent
[ "Yadong Lu", "Jianwei Yang", "yelong shen", "Ahmed Hassan Awadallah" ]
The recent advancements of large vision language models show their great potential in driving agent systems operating on user interfaces. However, we argue that the power of multimodal models like GPT-4V as a general agent on multiple operating systems across different applications is largely underestimated due to the lack of a robust screen parsing technique capable of: 1) reliably identifying interactable icons within the user interface, and 2) understanding the semantics of various elements in a screenshot and accurately associating the intended action with the corresponding region on the screen. To fill these gaps, we introduce OmniParser, a comprehensive method for parsing general user interface screenshots into structured elements, which significantly enhances the ability of GPT-4V to generate actions that can be accurately grounded in the corresponding regions of the interface. We first curated an interactable icon detection dataset using popular webpages and an icon description dataset. These datasets were utilized to fine-tune specialized models: a detection model to parse interactable regions on the screen and a caption model to extract the functional semantics of the detected elements. OmniParser significantly improves GPT-4V's performance on the ScreenSpot benchmark. On the Mind2Web and AITW benchmarks, OmniParser with screenshot-only input outperforms the GPT-4V baselines that require additional information beyond the screenshot. We further demonstrate that OmniParser can seamlessly integrate with other vision language models, significantly enhancing their agentic capabilities.
[ "Multimodal agent; GUI screen parsing;" ]
Reject
https://openreview.net/pdf?id=C6hUK6Q1Pi
https://openreview.net/forum?id=C6hUK6Q1Pi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rWZLSK9Lg6", "cch1RB4E33", "Vm0WzWyhej", "Q3xd0rG427", "BGEocYOYrT", "0ZRIzZwWoQ" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "decision", "meta_review" ], "note_created": [ 1730115816796, 1730624890598, 1730643551797, 1730596120819, 1737523860761, 1734721842763 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7757/Reviewer_HZMo" ], [ "ICLR.cc/2025/Conference/Submission7757/Reviewer_DUX7" ], [ "ICLR.cc/2025/Conference/Submission7757/Reviewer_ZAy2" ], [ "ICLR.cc/2025/Conference/Submission7757/Reviewer_MNEC" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7757/Area_Chair_A6NC" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces Ominiparser, a modular method designed to enhance large vision-language models e.g., GPT-4V, specifically for parsing user interface (UI) screenshots and grounding abilities to screen elements effectively.\\nOminiparser addresses current limitations in screen parsing, especially the inability of existing models to detect interactable regions and associate actions with specific areas in a UI. \\nTo achieve this, the authors:\\n1. curate a dataset for interactable icon detection and icon description from DOM tree; \\n2. subsequently using them to fine-tune models including a captioner BLIP-2 and a detector YOLO, served as a combination solution.\\n3. Ominiparser is tested on benchmarks like ScreenSpot, Mind2Web, and AITW, which connects with an advanced planner, show substantial improvements over baseline models (e.g., GPT-4V), enabling process screenshots alone without relying on additional HTML or view hierarchy data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work focuses on a critical problem in UI vision understanding: recognizing elements solely from visual data, i.e., screenshots, rather than relying on DOM data.\\n\\n2. 
The proposed strategy is flexible and can be applied across multiple planning frameworks.\\n\\n3. The grounding data takes into account the functionality of each icon and reflects a high-level understanding of it.\\n\\n4. The experiments include an online setting on Windows Agent Arena, which is helpful.\", \"weaknesses\": \"1. While this work provides great practical contributions, the main concern is its limited novelty and innovation; it mainly involves fine-tuning existing models using new data sources and applying these models through ensemble methods. The existing innovations do not meet the criteria for ICLR submissions, but would be suitable for a technical report or an industry workshop.\\n\\n2. There is limited analysis and discussion regarding UI visual understanding across platforms or data sources. For instance, while the community has substantial OCR data related to UI, icon-specific data remains scarce. The paper does not include distribution details on the proposed data, such as the number or sources of icons and software applications represented.\\n\\n3. The authors do not provide analysis on the impact of prompt length when combining data from different sources (e.g., GPT-4, GPT-4V, with or without additional tool-generated information). Additionally, there is no breakdown of the financial cost or computation time associated with using GPT-4V alongside OmniParser to complete individual tasks.\\n\\n4. The paper does not propose an innovative strategy for handling UI visual understanding with models like BLIP-2 or YOLO, particularly given that UI screenshots are often large (e.g., 2K resolution). Efficiency strategies could enhance processing in such high-resolution contexts.\", \"questions\": \"Small issue -- Typo:\\nline 837 -- QWen -> Qwen\\n\\n- Instead of applying both BLIP-2 and YOLO, how about training an open-world detector, letting GPT-4 determine which element should be grounded and output a text query, and then using this query for object grounding? 
\\n\\n- Will the model, training data, and benchmark data be open-sourced?\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"The ethics concern lies in the training data collection. The authors should provide a clarification about this part.\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces OMNIPARSER, a universal UI interface parsing methodology that addresses the challenges of reliably identifying interactive elements in user interfaces. By developing a dataset based on popular webpage DOM trees for interactive area detection and an icon-description dataset, the approach trains task-specific models to understand the semantics of elements within screenshots and accurately map intended actions to corresponding screen regions. OMNIPARSER, through its integration of multiple fine-tuned models, achieves enhanced screen understanding capabilities and demonstrates significant improvements across benchmarks including ScreenSpot, Mind2Web, and AITW, validating its seamless compatibility with existing vision-language models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors conducted comprehensive evaluations on multiple benchmark datasets, with OmniParser achieving state-of-the-art results across all datasets.\\n\\n2. OmniParser does not rely on additional information like DOM or view hierarchy, making it more generalizable.\\n\\n3. The authors performed extensive ablation studies validating the effectiveness of the proposed ID and IS modules, and verified the robustness of the entire framework on open-source Llama-3.2-V and Phi-3.5-V models.\\n\\n4. The authors released their code and models, making it easy for researchers to reproduce the results.\", \"weaknesses\": \"1. 
OmniParser's strong performance heavily relies on the powerful backbone model (GPT-4V), and switching to open-source models would significantly decrease its performance.\\n\\n2. The entire pipeline is not end-to-end, which increases its complexity and inference latency.\", \"questions\": \"1. If GPT-4V can locate objects without relying on SOM in the future, would this pipeline still be effective?\\n\\n2. Would better results be achieved by training an end-to-end model directly with the constructed data, rather than relying on additional fine-tuned models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces OMNIPARSER, a method designed to enhance the action-generating capabilities of multimodal models like GPT-4V when interacting with user interfaces. Specifically, the paper collects an interactable icon detection dataset with 67,000 unique screenshot images and DOM annotations. Then, the YOLO-v8 model is fine-tuned to detect icons within a screenshot. Additionally, an icon description dataset with 7,185 icon descriptions is built, and BLIP v2 is fine-tuned on it to output icon functionalities. Consequently, given a screenshot, OMNIPARSER can detect UI elements through the proposed icon detection, caption, and OCR models. It can further incorporate GPT-4V to generate GUI actions. Experiments on Mind2Web and AITW demonstrate promising improvements over the baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper proposes an alternative method for parsing GUI elements in screenshots by using an icon detection model combined with a description model to generate comprehensive information about the elements.\", \"The collected dataset on icon detection and description is expected to be highly beneficial for the community.\", \"The experiments are rich and thorough. 
And the benchmark performance shows promising results.\"], \"weaknesses\": \"1. Since the paper mainly uses existing YOLO-v8 and BLIP v2 models, its primary contribution lies in the proposed icon-detection and description datasets. However, many details regarding dataset construction and data statistics are missing.\\n\\n - The authors mention that the main focus is on collecting an \\\"INTERACTABLE REGION DETECTION\\\" dataset. However, how are elements deemed interactive? The DOM does not directly provide metadata indicating interactivity, only element types. How was this implemented, and does this process have a risk of misclassification?\\n \\n - For the icon description dataset, the authors used GPT-4o for labeling. However, GPT-4o itself has limitations in icon recognition. What mechanisms did the authors employ to ensure data quality, and what is the accuracy of the generated descriptions?\\n \\n - The authors mention annotating 7,185 icon-description data points. Are all these icons unique? Could the authors provide data distribution examples, including icon types?\\n\\n2. The authors mention merging DOM and OCR results by calculating overlap and merging bounding boxes with high overlap (>90%). For web applications, does this approach introduce substantial redundancy? For instance, in the bottom-left image of Figure 2, the bounding box for \\\"Contact Us\\\" is likely derived from the DOM and should generate a larger box, while OCR might produce a tighter one with lower overlap. More examples demonstrating the effectiveness of the merging strategy would be helpful.\\n\\n3. Some comparisons appear unfair. For instance, using OCR predictions for grounding in ScreenSpot will likely outperform existing MLLMs, but this does not necessarily highlight the model\\u2019s contribution. Similarly, in results on other benchmarks, it is difficult to determine whether OCR results or icon detection contribute more. 
Testing GPT-4V + OCR to demonstrate any performance boost would be beneficial.\", \"questions\": [\"What is the OCR module used in the Omniparser?\", \"Since each icon requires a description generated by BLIP-2, could this contribute to increased model latency?\", \"What are the performance levels of the separate icon detection and description modules, and which one currently presents a bottleneck? Additionally, it seems that no validation set is included for the icon description at this stage. It is quite hard to control when to stop training.\", \"Since GPT-4V can perform icon description directly, would it be feasible to test its performance on description tasks using GPT-4V?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces OmniParser, a vision-based GUI agent designed to enhance large vision-language models (such as GPT-4V) by effectively parsing user interface (UI) screenshots. OmniParser comprises several key components: an OCR module for extracting text elements, an icon detection model for identifying interactable icons, a captioning model to describe the functions of detected elements, and a generalist large vision-language model for interpreting the parsed information to make reasoning and action decisions. The authors curated datasets to train the icon detection and captioning models, fine-tuning them to improve their performance. They tested OmniParser on benchmarks such as ScreenSpot, Mind2Web, and AITW, demonstrating that the proposed framework significantly enhances the capabilities of vision-based agents over baseline models by enabling them to process screenshots without relying on textual inputs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well written and easy to follow. The demonstration of the GUI framework is clear.\\n2. 
The proposed strategy, which involves extracting the position of interactable elements along with their function descriptions, is both flexible and effective. It significantly enhances GPT-4V\\u2019s agentic capabilities, particularly the GUI grounding capabilities of the entire framework.\\n3. The data curation considers the position and function description of interactable elements, which would be beneficial to the research community if it were to be open-sourced.\", \"weaknesses\": \"1. The novelty and innovation in this work are limited. The primary approach involves training existing modules for various purposes, such as icon detection and captioning, and then integrating these modules to construct a GUI framework. This does not meet the criteria for ICLR.\\n\\n2. Missing details.\\n \\n - 2.1 The authors collected data for interactable region detection from web pages. However, in Figure 2, most examples contain text elements, with very few icons shown. If most of the interactable elements from web pages are texts, how does the model generalize to icons from other domains, such as mobile and desktop interfaces? Additionally, what are the categories of interactable elements and their distributions in the training data?\\n\\n - 2.2. The authors curate the icon description from the ScreenSpot dataset, since the data scale of ScreenSpot is not large, how do the authors guarantee the model\\u2019s generalization to other datasets, benchmarks or real-world applications?\\n\\n - 2.3 Evaluation from the cost perspective: Since OmniParser combines different modules for OCR, element detection, element description, and action decision, the inference time could be longer when compared to end-to-end frameworks, such as SeeClick[1], or more advanced VLMs (e.g., Qwen2-VL) trained with SeeClick data. Additionally, OmniParser utilizes GPT-4V in its framework, but the associated cost in USD is not included in the analysis.\\n\\n[1] Cheng K, Sun Q, Chu Y, et al. 
Seeclick: Harnessing gui grounding for advanced visual gui agents[J]. arXiv preprint arXiv:2401.10935, 2024.\", \"questions\": \"1. typos: l198 - l199: \\u2018click on \\u2018settings\\u201d, \\u2018click on the minimize button\\u2019 \\u2192 \\u201cclick on settings\\u201d, \\u201cclick on the minimize button\\u201d\\n2. Related questions from the weakness part.\\n3. Are the authors going to release the following? (1) the icon detection and description dataset, (2) the trained models, (3) code for data collection.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper is borderline with two reviewers slightly positive, and two reviewers on the negative side. Overall, the reviewers generally agree that the paper is well-written and the proposed method shows promising results on various benchmarks. However, they raise concerns about the novelty and complexity of the approach. More specifically, the reviewers liked strong empirical results that Omniparser shows on multiple benchmarks. Omniparser seems also be able to generalize to work with different LVLMs and across various platforms. The reviewers also liked the dataset contribution. On the other hand, the reviewers are concerned with the limited novelty of the work and feel the work doesn't meet the bar of ICLR. The pipeline is also complex, which seems an integration of existing models.\\n\\nBased on the reviews and discussion, the paper appears to fall short of the acceptance threshold. The lack of fundamental technical novelty and the complexity of the pipeline outweigh the empirical results and dataset contribution. 
Addressing these concerns can significantly improve the contribution of the work.\", \"additional_comments_on_reviewer_discussion\": \"Re: Novelty: The authors argue that their main contribution lies in identifying and addressing the limitations of existing LVLMs in UI understanding. They also highlight the practicality and efficiency of their approach compared to training end-to-end LVLMs.\\n\\nRe: Latency and Cost: The authors acknowledge the latency issue and plan to explore optimizations in future work. They also provide an analysis of the cost associated with using GPT-4V.\\nOverall, reviewers are not moved by the rebuttal.\"}"
]
}
C6d9S2lYFN
A Comprehensive Deepfake Detector Assessment Platform
[ "Liu liu", "Zhixuan Chu", "Zhongjie Ba", "Chengyi Yan", "Ziyue Zhan", "Feng Lin", "Zhan Qin", "Kui Ren" ]
The rapid development of deepfake techniques has raised serious concerns about the authenticity and integrity of digital media. To combat the potential misuse of deepfakes, it is crucial to develop reliable and robust deepfake detection algorithms. In this paper, we propose a comprehensive **D**eepfake **D**etector **A**ssessment **P**latform (**DAP**), covering six critical dimensions: benchmark performance, forgery algorithm generalization, image distortion robustness, adversarial attack resilience, forgery localization accuracy, and attribute bias. Our framework aims to provide a standardized and rigorous approach to assess the performance, generalization ability, robustness, security, localization precision, and fairness of deepfake detection algorithms. Extensive experiments are conducted on multiple public and self-built databases, considering various forgery techniques, image distortions, adversarial attacks, and attributes. The proposed framework offers insights into the strengths and limitations of state-of-the-art deepfake detection algorithms and serves as a valuable tool for researchers and practitioners to develop and evaluate novel approaches in this field. All codes, scripts, and data described in this paper are open source and available at https://github.com/tempuser4567/DAP.
[ "deepfake detection; benchmark; evaluation" ]
Reject
https://openreview.net/pdf?id=C6d9S2lYFN
https://openreview.net/forum?id=C6d9S2lYFN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTtnG7IEPV", "pJggddY6aE", "jc4CgCrZ7P", "Xjt0Uvz7Wg", "T2FAin5zY9", "QNqDqBG8Hi", "0k4RNWuuQP" ], "note_type": [ "official_review", "official_review", "meta_review", "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1730637548390, 1730716840118, 1734127880308, 1730621114051, 1730701015241, 1731093167221, 1737523575728 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3436/Reviewer_bffT" ], [ "ICLR.cc/2025/Conference/Submission3436/Reviewer_Gfjm" ], [ "ICLR.cc/2025/Conference/Submission3436/Area_Chair_t22o" ], [ "ICLR.cc/2025/Conference/Submission3436/Reviewer_QXSL" ], [ "ICLR.cc/2025/Conference/Submission3436/Reviewer_M56J" ], [ "ICLR.cc/2025/Conference/Submission3436/Reviewer_UP4U" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"Glad to review the paper.\\n\\nThis paper proposes a comprehensive deepfake detector assessment platform, which could provide evaluating results from multiple aspects.\\nThe platform includes organized datasets, detectors, adversarial algorithms, etc., to support future work.\\n\\nIn general, I believe the work is referenceable to related-domain researchers.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This work considers assessing deepfake detectors from multiple aspects, e.g., generalization, distortion robustness, adversarial attack resilience, forgery localization, and attribute, the range of the work is wide enough.\\n\\nThe evaluation results in this work are large enough, in addition, the authors provide a tool (platform) to support future work.\", \"weaknesses\": \"I am concerned that the evaluation experiments conducted by the authors could not support the goals of this work.\\n\\n(1) Regarding baseline detectors, the utilized 10 detectors are not well-categorized and are not with enough explanations, it is not clear why the 10 are selected, are they state-of-the-art or 
covering different types? For example, FaceX-ray (Li et al., 2020a) and the Frequency-aware method (Tan et al., 2024) are not evaluated, and it is unclear whether the selected detectors cover the data-driven, spatial artifact-based, and frequency artifact-based types. In fact, in the related work part, although some detectors are listed, the state-of-the-art (or some typical) detectors are not introduced. I suspect the selected detectors could lead to biased conclusions.\\n\\n(2) Regarding innovation, the authors claim that the platform differs from existing work in the aspects of attribute assessment, adversarial attack, and forgery localization. However, only two adversarial attacks (GANprintR is the main one) are implemented on (only) two baseline detectors, and forgery localization is evaluated based on one detector only. First, the experimental conclusions could be biased given such limited attempts, and these experimental settings should be explained in detail. Second, could more adversarial attacks or forgery localization algorithms be integrated into the platform or implemented on more baseline detectors?\\n\\n(3) Regarding findings, although multiple assessments are conducted in the work, it would be useful to summarize some findings from these aspects of the evaluations, e.g., why the detectors are not generalizable or are vulnerable. Which aspects should these detectors improve on? Are there any future directions for these detectors? 
A \\\"Findings and Future Directions\\\" section should be included that summarizes the findings from the results across every evaluation aspect and provides concrete recommendations for improving deepfake detection algorithms (based on the findings).\", \"questions\": \"My major concerns are summarized in the weakness part, questions from three aspects are expected to be responded.\\n\\nI could consider changing the score if the responses are convincing enough, or any of my misunderstandings exist.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a comprehensive Deepfake Detector Assessment Platform designed to evaluate and improve the performance of deepfake detection algorithms. The authors built a platform with 27 evaluation tasks covering six key dimensions to comprehensively evaluate deepfake detection algorithms. The platform conducts extensive experiments on public and self-built databases, considering various forgery techniques, image distortions, adversarial attacks, and attributes.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.This paper provides a comprehensive evaluation framework, provides new directions and tools for the research of deep fake detection algorithms, and also emphasizes the importance of considering the generalization ability and resistance to adversarial attacks of algorithms in practical applications.\\n2.This paper conducts experiments on multiple public and self-built databases, covering a variety of forgery techniques and algorithms, and provides rich experimental data and result analysis.\", \"weaknesses\": \"1.This paper is insufficient in terms of theoretical analysis and method design.\\n2.This paper does not focus on video detection.\\n3.There is insufficient research on forgery detection of compressed images, which is common in social media.\", 
\"questions\": \"It is recommended that the authors provide a detailed explanation of why the algorithms work and the rationale behind them.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a comprehensive Deepfake Detector Assessment Platform (DAP) for deepfake detection that is composed of six critical dimensions, including benchmark performance, forgery algorithm generalization, image distortion robustness, adversarial attack resilience, forgery localization accuracy, and attribute bias.\\nHowever, the reviewers raised several concerns below but the authors did not rebut:\\n1. System architecture of the proposed platform is not clear.\\n2. Performance on combination of attacks is unknown.\\n3. Rationale behind the proposed DAP is not clear.\\n4. Motivation, innovation, and writing of this paper need remarkably improvement.\\nAlthough the attempt of this work is encouraging, its current status cannot be accepted based on the high standard of ICLR.\", \"additional_comments_on_reviewer_discussion\": \"none! The authors did not rebut at all!\"}", "{\"summary\": \"This paper presents a comprehensive Deepfake Detector Assessment Platform (DAP) designed to evaluate the performance, generalization capability, robustness, security, localization accuracy, and fairness of deepfake detection algorithms. The platform covers six key dimensions: benchmark performance, forgery algorithm generalization, image distortion robustness, resilience against attacks, attribute bias, and forgery localization accuracy. Through extensive experiments on multiple public and self-built databases, the framework provides researchers and practitioners with a standardized and rigorous evaluation tool to develop and assess new approaches in the field.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
A comprehensive evaluation framework covering multiple key dimensions of deep forgery detection is provided.\\n2. The experiments are rigorously designed, using multiple public and self-built databases, as well as a large number of self-generated fake images.\\n3. Adversarial attacks and attribute bias evaluation were introduced, which are important considerations for practical applications.\", \"weaknesses\": \"1. The experimental settings are not reasonable in some scenarios.\\n2. The evaluation experiments are not sufficient.\\n3. Meaningful conclusions and findings are lacking.\\n4. Video-based deep forgery detection is equally important but has not yet been addressed.\", \"questions\": \"1. The authors choose forgery localization precision as a critical dimension. Not all forgery detection models can localize the forged region, so it is not reasonable to select it as a unified performance metric.\\n2. It is not necessary to show the image processing in Fig 3 and Fig 4.\\n3. In the generalization evaluation, these detection algorithms are not newly published, and only UCF is designed for generalized forgery tasks. Thus, these experiments cannot support meaningful conclusions and findings.\\n4. In the adversarial perturbation experiments, the authors mention they chose the StyleAttack algorithm as the attack method. But can it be regarded as a representative attack algorithm? Please give the reasons.\\n5. For the attribute bias evaluation, the authors summarize ten detection algorithms with different metrics. However, the evaluation lacks valuable conclusions and analysis. How does it inform further detector design?\\n6. The Large Multimodal models are considered as the generation methods. What changes and differences have been brought about by this strategy? 
Unfortunately, the authors do not discuss this.\\n\\nOverall, the paper reads more like a research report, lacking in-depth experimental analysis and theoretical hypotheses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a platform for assessing deepfake detection algorithms in terms of their performance, generalizability, robustness, security, localization precision, and fairness. Extensive experimental evaluations were conducted on public and author-generated datasets, showing limited performance of the existing deepfake detectors in the literature. The paper aims to offer insights into the strengths and limitations of the existing deepfake detectors. The code and data are publicly released.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Extensive evaluations were conducted on various deepfake detectors across various aspects.\"], \"weaknesses\": [\"The paper isn't well written. Many times an argument is presented without being justified or citing the literature. Please see the examples provided in the \\\"Questions\\\" section. Table 1 and Figure 1 were not referred to by the main text of the paper.\", \"The motivation of the paper isn't well articulated. It is unclear from the sentence \\\"To more comprehensively assess the detection capabilities of algorithms under various complex conditions, we have built a Deepfake Detector Assessment Platform (DAP)\\\", why this paper is important. Please explicitly articulate the extra contributions over the three cited deepfake benchmark works in Table 1. For example, why is a new evaluation platform needed? 
If extra aspects like \\\"attribute bias assessment\\\", \\\"adversarial attack resilience evaluation\\\", \\\"forgery localization accuracy evaluation\\\" as mentioned in line 145 are important, why not find the most suitable existing platform listed in Table and extend it?\", \"Each subsection of Section 3 Proposed Evaluation Framework contains mostly descriptions without insights into why they were proposed as written.\", \"Line 215: \\\"The platform generated a total of 5,976,145 fake images... Therefore, this evaluation can test the detection algorithm\\u2019s performance on forgery techniques that may have never been encountered before and obtain more objective generalization evaluation results.\\\" It\\u2019s unclear how merely a large number (5M) of images can achieve this goal. If a claim (which also needs to be justified) is like \\u201ca large number of deepfake algorithms spans a rich enough space that may encompass fake images generated by unseen deepfake algorithms\\u201d, then it may make more sense.\", \"The paper aims to offer insights into the strengths and limitations of the existing deepfake detectors, but it fails to do so by merely reporting the experimental results without analyzing why some were performing well or badly. It also didn't discuss the characteristics of deepfake detection algorithms that could have shed light on their performance. For example, line 373 reads \\\"The highest and lowest accuracies are 80.42% and 36.52%, respectively. Only EfficientNetB4 and UCF performed relatively well compared to the other eight detectors. 
Additionally, the Large Multimodal Model is the second most effectively detected category by them.\\\"\", \"Line 358 reads: \\\"Overall, the accuracy of the detection algorithms is generally low.\\\" It should at least compare these numbers with those in the literature to provide a sense of whether these numbers are abnormal.\", \"The conclusion overclaimed what the paper did, e.g., \\\"This paper analyzes two issues\\\" and \\\"we have identified potential causes of these issues\\\".\"], \"questions\": \"Other comments:\\n\\n1. Lines 67-68: \\\"... they have several shortcomings: (1) They almost exclusively depend on public databases; (2) They lack actual tests with self-generated fake data\\\" Line 200: \\\"The fake data is entirely generated by the evaluation platform itself.\\\" It's unclear why exclusively depending on public data and testing with self-generated fake data are drawbacks. Please substantiate your argument. \\n\\n1. Line 198: \\\"Except for four deepfake types in Benchmark Performance Evaluation, Large Multimodal Model is included as the fifth deepfake type.\\\" Why isn\\u2019t this included in the same block as the other 4 deepfake types? Btw, there\\u2019s also a logical issue in writing, i.e., \\u201ca (large multimodal) model can\\u2019t be a \\u201c(deepfake) type\\u201d.\\n\\n1. Line 215: \\\"The platform generated a total of 5,976,145 fake images. Through the above generation pipeline, the platform simulates the complex forgery situation in the real world.\\\" Do the images include those from video frames? If so, it\\u2019s better to separately report the number of still images and the number of videos as still images and video frames have different temporal effects on human observers.\\n\\n1. Figure 5's illustration and its corresponding text descriptions in Section 3.4 are unclear.\\n\\n1. 
Line 245: \\\"We selected the following nine types as common image disruptions: Compression, Brightness, Contrast, Flip, Rotation, Color, Sharpness, Blur, and Noise.\\\" What\\u2019s the justification or insight here? Are they from the common practice in the literature that is shown to be the most effective, or are they just some numbers (between lines 249-255) proposed by the authors? Without either of them, the paper reads like a manual and doesn\\u2019t have much scientific value or novelty.\\n\\n1. Line 281: \\\"We selected five attributes for evaluation, including Camera Angle, Gender, Ethnic Group, Expression, and Lighting Condition.\\\" Lack justification.\\n\\n1. Line 409: \\\"For better comparison, the results for the four deepfake categories are averaged in this section.\\\" It's unclear whether it allows better compression.\\n\\n1. Line 412: \\\"EfficientNetB4 and UCF remain the strongest detectors...\\\" Insights lacking.\\n\\n1. Could you comment on the paper on how easily the platform can be extended to include other deepfake detection algorithms? What efforts have been made to ensure this?\", \"please_improve_the_following_writing\": \"1. Line 38: It was mentioned at the beginning of the second paragraph of the intro section that \\\"Unfortunately, the detection accuracy of these detectors is actually low.\\\" This reads a bit abrupt. It should either provide references to the literature or make a forecast to the experimental results of the paper to substantiate this claim.\\n\\n1. Lines 43-45 are not well articulated. Why not just say they failed to generalize on unseen deepfake algorithms and distortion types? In addition, the bolded texts affect the logical flow of the paragraph. \\n\\n1. 
Lines 52-53: \\\"Consequently, in practical detection scenarios, besides basic accuracy, the capabilities of Forgery Algorithm Generalization, Image Distortion Robustness, and Adversarial Attack Resilience are equally important.\\\" They can be all important, but there\\u2019s no need to claim that they are \\u201cequally important.\\u201d\\n\\n1. Line 146: \\\"The only study that constructs a private dataset uses...\\\" Please be specific about which study.\\n\\n1. Line \\\"**D**eepfake **D**etector **A**ssessment **P**latform (DAP)\\\". Two \\u201cD\\u201ds are bolded but one \\u201cD\\u201d is in the short form. Could you figure out which \\u201cD\\u201d goes into the short form?\\n\\n1. Line 173: \\\"which covers 27 evaluation tasks...\\\" It's unclear what these tasks are.\\n\\n1. Line 191: \\\"... the platform calculates various standardized evaluation metrics ...\\\" Please be specific about the metrics being used.\\n\\n1. The statements in lines 265, 278, 292, and 355 have logical issues. The one in line 265 reads \\\"This section is primarily used to evaluate ...\\\"\\n\\n1. Line 318: \\\"The platform implements 11 popular public databases, including ...\\\" Logical flaws.\\n\\n1. Line 418: \\\"This part evaluates whether the detectors are capable of resisting adversarial attacks.\\\" \\\"whether\\\" should be \\\"to what extent\\\".\\n\\n1. Figure 1: \\\"The three bottom boxes correspond to the weaknesses of the existing DF detection algorithms. 
It\\u2019s logically unclear why they are not put nearer to, or within, the \\u201cDetection\\u201d box.\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a platform called Deepfake Detector Assessment Platform (DAP), to assess Deepfake detectors in terms of benchmark performance, forgery technique generalization, image distortion robustness, resilience to adversarial attacks, forgery localization accuracy and attribution bias. The paper also goes to some length to discuss the approach as well as conduct extensive experiments on many datasets, which offers insights into strengths and weaknesses of current Deepfake detectors.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is comprehensive and detailed on many fronts. It is very important to have a platform to assess Deepfake detectors. Currently, this is done by performance evaluations in different papers or in online media forensics challenges. However, what works on papers usually does not work very well on challenges. This paper attempts to consolidate Deepfake detectors using a platform.\\n\\nThe experiments, the comparisons, the datasets used, the methods compared are all very comprehensive and detailed.\", \"weaknesses\": \"While the problem the paper is trying to address is important, there are some key issues which need to be addressed.\\n\\nFirstly, the paper builds the narrative that it is proposing a platform called Deepfake Detector Assessment Platform (DAP) for rigorous benchmarking of Deepfake Detectors. But the paper does not discuss what this platform is, what the architecture is, how this platform can be utilized by the forensics community or other parties and so on. Since the paper proposes a platform, this is very important. 
Though some hints are present in the Github repository, such as Docker and backend-api, a section describing the system architecture of the DAP platform is definitely needed. Without this section, this paper is just a benchmark paper for Deepfake Detection.\\n\\nSecond, the paper gives a lot of focus to Robustness, Image Distortions and Adversarial perturbations, which are all good experiments to have. However, the paper treats distortions such as compression, smoothing, noise, blur as independent distortions. In a realistic scenario, a combination of these will usually be applied. One more experiment that randomly picks combinations of distortions and then measures the performance metrics is needed; this will make the experiments cater to realistic use cases.\\n\\nThird, the paper focuses mainly on a large number of Deepfake datasets, which is good. But it would also be better if the paper had a section where a large number of wild face images and/or videos are taken to see how these algorithms perform on them. This will shed light on how biased the Deepfake detection algorithms are.\", \"questions\": \"How do the algorithms perform on combinations of distortions and/or adversarial perturbations?\\n\\nHow do the algorithms perform on real world wild data (not part of datasets)?\\n\\nWhat is the system architecture of the proposed platform?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
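One way to make the combined-distortion experiment suggested in the review above concrete is the following numpy-only sketch. The distortion set, parameter ranges, and function names are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-image distortions on float arrays in [0, 1].
def brightness(img, delta):
    return np.clip(img + delta, 0.0, 1.0)

def contrast(img, factor):
    return np.clip((img - 0.5) * factor + 0.5, 0.0, 1.0)

def hflip(img, _):
    return img[:, ::-1]

def gaussian_noise(img, sigma):
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

# Each distortion is paired with a sampler for its parameter.
DISTORTIONS = [
    (brightness,     lambda: rng.uniform(-0.2, 0.2)),
    (contrast,       lambda: rng.uniform(0.7, 1.3)),
    (hflip,          lambda: None),
    (gaussian_noise, lambda: rng.uniform(0.01, 0.05)),
]

def random_combination(img, k=2):
    """Apply k randomly chosen distortions in sequence -- the realistic
    case the review raises, as opposed to testing each in isolation."""
    idx = rng.choice(len(DISTORTIONS), size=k, replace=False)
    for i in idx:
        fn, sample_param = DISTORTIONS[i]
        img = fn(img, sample_param())
    return img

img = rng.uniform(size=(64, 64, 3))  # stand-in for a face image
out = random_combination(img, k=3)
```

A detector's accuracy on `random_combination` outputs, compared against its accuracy under each distortion in isolation, would quantify the gap this review points out.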
C65Hpf02Ay
One-step Image-function Generation via Consistency Training
[ "Ke Liu", "Feng Liu", "Jingjun Gu", "Shuyi Zhang", "wangzhihua", "Jiajun Bu", "Bo Han", "Haishuai Wang" ]
Consistency models aim to deliver a U-Net generator to map noise to images directly and enable swift inference with minimal steps, even when trained in isolation with the consistency training mode. However, the U-Net generator requires heavy feature extraction layers for multi-level resolutions and learning convolution kernels with specific receptive fields, resulting in the challenge that consistency models demand heavy training resources and fail to generate images with arbitrary user-specified resolutions. In this paper, we first validate that training the original consistency model with a small batch size via the consistency training mode is quite unstable, which motivates us to investigate efficient and flexible consistency models. To this end, we propose to use a novel Transformer-based generator to generate continuous image functions, which can then be differentiably rendered as images with arbitrary resolutions. We adopt implicit neural representations (INRs) to form such continuous functions, which help to decouple the resolution of generated images from the total number of parameters generated by the neural network. Extensive experiments on one-step image generation demonstrate that our method greatly improves the performance of consistency models with low training resources and also provides an efficient any-resolution image sampling process.
[ "Image generation", "Diffusion models", "Consistency Models" ]
https://openreview.net/pdf?id=C65Hpf02Ay
https://openreview.net/forum?id=C65Hpf02Ay
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zaTnNcldQX", "xPROBRc1Si", "w5jc1Ct8E7", "uT2nHz2xJp", "sxuurBswcg", "q1ecoxI9TQ", "pSnD6BIXGO", "oraVgPYqxr", "ltcDneAAMt", "jUQIuiSmWh", "hxLpuKPTKQ", "hW4iy7tZGw", "flAPNwMBWP", "f3DuNuYihQ", "di0K650esB", "c8brrgusHy", "bOgQgD7brp", "NLfG2wOqJR", "JIJitl5uws", "JDg5Msn8MZ", "G0iIgvJtHr", "CTWlDvLQ6s", "BZb8s9sXMZ", "9KbBqyx1xv", "8yrGYoyEC4", "7SNrK41FmT", "5swUfnL3PO", "00GNJqmfj4" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733060641002, 1732105351976, 1730647819764, 1733060677083, 1732498613419, 1732105896129, 1734070220446, 1732498574140, 1732341775150, 1732604800555, 1732604779179, 1732804833179, 1732763121842, 1732499895241, 1730698232937, 1732105396216, 1733060542537, 1732498554088, 1732719433829, 1733063233447, 1730448356641, 1733060596155, 1732105232732, 1730554705530, 1732533331106, 1732105514966, 1732373147674, 1732105117294 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Reviewer_ftNQ" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Reviewer_NMKa" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Reviewer_N1ca" ], [ "ICLR.cc/2025/Conference/Submission9675/Reviewer_ftNQ" ], [ "ICLR.cc/2025/Conference/Submission9675/Reviewer_8PPP" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Reviewer_NMKa" ], [ "ICLR.cc/2025/Conference/Submission9675/Reviewer_N1ca" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Reviewer_NMKa" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ], [ "ICLR.cc/2025/Conference/Submission9675/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer NMKa,\\n\\nWe appreciate the time and effort that you have dedicated to reviewing our manuscript. We have carefully addressed all your queries and uploaded a new PDF version. Have our responses addressed your major concerns? If you have further concerns, please discuss them with us. We will address it further. We look forward to your feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 9675\"}", "{\"title\": \"Response to Reviewer NMKa (Part I)\", \"comment\": \"**W1:** (1) Novelty ... incremental.\\n\\n**R1:** We need to emphasize that we deliver INR in consistency models mainly for **one-step any-resolution image generation**, which mainly targets the efficient generation problem when generating images with multiple resolutions or unknown arbitrary resolutions. 
\\n\\nOur paper shows several differences from, and advantages over, other INR-based image generation frameworks, and the greatest advantages are the **flexible and efficient one-stage end-to-end training pipeline and the one-step generation process**. As we have discussed in \"diffusion models based on implicit neural representations\" in the related work part of the paper (lines 162-169), most works that apply INRs adopt **two-stage training pipelines**. They consider the process of converting signals to the INR (encoding signals to the INR space) and the process of diffusion on the INR space as two processes that are totally independent of each other. The two-stage approach helps to ease the difficulty of generating INRs but the training process is inflexible and the error in the first representation stage would greatly affect the performance of the second diffusion stage, e.g., the FID of only 40.40 reported by Functa on the CelebA-HQ 64^2 dataset. In contrast, our method enjoys the **one-stage end-to-end training process** and optimizes all modules from pure data. \\n\\nNaively implementing a one-stage training pipeline for diffusion on INRs **requires evaluating the denoiser network in the image space and will face two challenges.** 1) It is unaffordable to evaluate the denoiser network in the image space for diffusion models because they need to render the generated INRs into images for each diffusion step, which makes the inference process very costly. 2) It is very hard for a denoiser network to generate INRs for images with so much noise because INRs are biased toward fitting low-frequency signals and struggle to fit high-frequency noise. Therefore, they deliver a two-stage training pipeline to avoid these two challenges: firstly converting all signals to their INRs and directly applying diffusion on the INR space. \\n\\nIn contrast, **our method relying on consistency training has many insights and solves these two challenges in a much cleverer way**. 
1) The consistency training enables our model to generate INRs with just a single diffusion step, therefore it is affordable for us to directly evaluate the denoiser network in the image space. 2) The target of consistency training is to train a denoiser network to map all points in the PF-ODE trajectory into the original image, therefore our denoiser network only needs to generate INRs for images with little noise, which greatly improves the training efficiency of the denoiser network. \\n\\nAs a result, we believe **our method is novel and should be distinguished from other INR-based diffusion models.**\\n\\n**W2:** (2) Related work... beneficial.\\n\\n**R2:** Thank you very much for the suggestion about additional discussion with high-resolution generation strategies. Most high-resolution generation methods are based on generation in a patch-by-patch manner, e.g. [1] suggested by Reviewer 8PPP. However, our method focuses on generating the entire image function in a single step, and the image function is represented by a global function and can be rendered into any-resolution images. Basically, **the patch-by-patch approach for generating high-resolution images is orthogonal to our method of generating the entire image function as a whole**. The former focuses more on generating local regions iteratively, while our method emphasizes the global representation. Additionally, it is entirely feasible to combine our method with the patch-by-patch strategy. Specifically, we can generate a global signal within each patch to enable fast generation with flexible resolutions at the patch level, and then use the patch-by-patch approach to assemble a high-resolution image.\\n\\n**W3:** (3) Performance ... Table 2.\\n\\n**R3:** We need to clarify that **our model is comparable with Transformer-based generators, e.g. better IS metric than DiT.** Table 2 is mainly for the ablation study. 
It shows that DiT should be a better denoiser for consistency models than the original U-Net, which motivates us to deliver a DiT-like encoder as our feature extraction module that handles input noisy images and modulates noise level embedding. However, as we have discussed in Lines 479-504, our method generates image function rather than directly generating fixed-resolution images. **The architectures (U-Net, UViT, DiT) that generate fixed-resolution images** suffer from multiple training and inference processes when generating images with multiple resolutions, therefore they are less flexible and have a much lower multi-resolution sampling FPS than our method.\"}", "{\"summary\": \"This paper addresses two main issues: the training instability of consistency models with small batch sizes and the limitation of generating images at fixed resolutions. To tackle these challenges, the authors propose using a Transformer-based generator along with implicit neural representations. Additionally, to improve training stability, they introduce an auxiliary task before training the consistency model, which leads to faster convergence and enhanced image generation quality. Experimental results show that this approach improves performance in one-step image generation with reduced training requirements and enables efficient, any-resolution image sampling.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"A key strength of this paper lies in its innovative design of a consistency model that supports multi-resolution sampling, overcoming the fixed-resolution limitations of traditional models. The approach also effectively addresses training instability at low batch sizes, making it feasible to train with fewer resources.\", \"weaknesses\": \"While the paper presents improved training efficiency as a key contribution, there are two aspects that raise questions regarding this claim:\\n\\n1. 
In comparison with Song et al.\\u2019s experimental setup, it seems expected that training with a smaller batch size would lead to lower performance. To convincingly demonstrate an improvement in training efficiency, comparing the proposed model with a consistency model trained on low batch sizes may be insufficient. Instead, it would strengthen the argument to show that the proposed method performs better than models trained with larger batch sizes.\\n2. In Figure 8, it appears that pre-training is essential for reaching the convergence point of \\u201cDenoising Distance.\\u201d However, considering the overall training time, if an additional 30 epochs of pre-training are required compared to traditional methods, it may be worth questioning whether this approach can truly be considered efficient.\", \"questions\": \"1. Total Training Time: Could the authors clarify the total training time required for the model? While it is mentioned that 30 epochs were used for pre-training, it would be helpful to know the training duration for both CM-UNet and CM-Func models.\\n\\n2. Evaluation in Table 2: The evaluation process for Table 2 is unclear, particularly regarding how the multi-resolution sampling FPS was measured for CelebA 64. Additional explanation on the methodology used for this metric would be appreciated.\\n\\n3. Effectiveness with Larger Batch Sizes: It would be interesting to know if the proposed method continues to perform better than models trained with batch sizes larger than 32.\\n\\n4. Related Works : Adding the following reference to the related work section would enhance the context of the study: Zhuang, Peiye, et al. 
\\\"Diffusion probabilistic fields.\\\" The Eleventh International Conference on Learning Representations, 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer N1ca,\\n\\nWe appreciate the time and effort that you have dedicated to reviewing our manuscript. We have carefully addressed all your queries and uploaded a new PDF version. Have our responses addressed your major concerns? If you have further concerns, please discuss them with us. We will address it further. We look forward to your feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 9675\"}", "{\"comment\": \"Dear Reviewer N1ca,\\n\\nWe appreciate the time and effort that you have dedicated to reviewing our manuscript. We have carefully addressed all your queries. Could you kindly spare a moment (approximately 2 minutes) to review our responses? Have our responses addressed your major concerns? If there is anything unclear, we will address it further. We look forward to your feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 9675\"}", "{\"title\": \"Response to Reviewer N1ca (Part II)\", \"comment\": \"**W5:** To ... DISTS.\\n\\n**R5:** Thank you very much for bringing our attention to these image quality assessment metrics. We mainly follow the evaluation process of current generative models, which is to evaluate the difference between the distribution of training images and generated images. We also follow your suggestion to evaluate different methods in terms of the image quality assessment metrics on two datasets. Since LPIPS and DISTS are full reference metrics, the generative models do not have ground truth, so we cannot evaluate such two metrics. 
We use the pyiqa package to evaluate the no reference metrics mentioned by you, i.e., NIQE for distorted images, CLIPIQA for visual consistency and comprehensibility of image content, MUSIQ for image structure and distortion in images, and MANIQA for naturalness and structural information of images. Specifically, we follow the setting mentioned in line 401 in the paper to evaluate NIQE, CLIPIQA, and MUSIQ on 50000 generated images and evaluate MANIQA on 1000 generated images for efficiency. The results are provided in the following two tables. **The best results are marked as bold and the second results are marked with underline.** \\n\\n||Models|NIQE $\\\\downarrow$|CLIPIQA $\\\\uparrow$|MUSIQ $\\\\uparrow$|MANIQA $\\\\uparrow$\\n|-|-|-|-|-|-|\\n|Cifar10-32|CM-UNet| 23.003 | 0.516 | 17.043 | 0.105\\n|Cifar10-32|**CM-Func**| **22.264** | **0.520** |**17.143** |**0.107**\\n|CelebA-64|CM-UNet | 6.810 | 0.465 | 21.198 | 0.210\\n|CelebA-64|**CM-Func** | **6.503** | **0.549** | **22.230** | **0.220**\\n\\n||Models|NIQE $\\\\downarrow$|CLIPIQA $\\\\uparrow$|MUSIQ $\\\\uparrow$|MANIQA $\\\\uparrow$\\n|-|-|-|-|-|-|\\n|CelebA-64|CM-UNet | 6.810 | 0.465 | 21.198 | 0.210\\n|CelebA-64|CM-UViT | $\\\\underline{6.564}$ | 0.516 | **22.805** | 0.194\\n|CelebA-64|CM-DiT | 6.666 | **0.561** | 22.139 | **0.234**\\n|CelebA-64|CM-Func | **6.503** | $\\\\underline{0.549 }$| $\\\\underline{22.230}$ | $\\\\underline{0.220}$\\n\\n**We find that these quantitative results are highly consistent with our conclusions in the paper (lines 499-504), which is 1) our model has a better performance than the original U-Net in terms of all metrics. 2) the performance of our function generator is comparable with the Transformer-based generators.**\\n\\n[1] Yin, Tianwei, et al. \\\"One-step diffusion with distribution matching distillation.\\\" CVPR. 2024.\\n\\n[2] Song, Yang, et al. \\\"Consistency Models.\\\" ICML, 2023.\\n\\n[3] Song, Yang, and Prafulla Dhariwal. 
\\\"Improved Techniques for Training Consistency Models.\\\" ICLR, 2024.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Dear Reviewer ftNQ,\\n\\nWe appreciate the time and effort that you have dedicated to reviewing our manuscript. We have carefully addressed all your queries. Could you kindly spare a moment (approximately 2 minutes) to review our responses? Have our responses addressed your major concerns? If there is anything unclear, we will address it further. We look forward to your feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 9675\"}", "{\"title\": \"Thank authors for the response\", \"comment\": \"Thank the authors for the response. I believe that incorporating INR into the consistency model can offer potential benefits, such as enabling generation at arbitrary resolutions. Thank the authors for clarifying the structural differences between this work and other INR-based approaches. However, I would suggest focusing the contribution on a major point \\u2014 whether that is any-resolution generation, high-quality generation, or computational efficiency.\\n\\nCurrently, the comparisons utilize different baselines for various aspects. While the results show certain benefits, it is challenging to see a clear advantage in any particular area. For instance, regarding any resolution generation, a systematic review and evaluation are not clearly presented. When considering generation quality, the improvement reported in Table 2 and in response to reviewer N1ca does not appear clear compared to the CM-DiT baseline. Other reviewers seem to have similar concerns. I recommend emphasizing one primary contribution with clear, compelling advantages, supported by systematic comparisons and evaluations.\"}", "{\"comment\": \"Dear Reviewer N1ca,\\n\\nWe appreciate the time and effort that you have dedicated to reviewing our manuscript. 
We have carefully addressed all your queries. Could you kindly spare a moment (approximately 2 minutes) to review our responses? Have our responses addressed your major concerns? If there is anything unclear, we will address it further. We look forward to your feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 9675\"}", "{\"comment\": \"Dear Reviewer 8PPP,\\n\\nWe appreciate the time and effort that you have dedicated to reviewing our manuscript. We have carefully addressed all your queries. Could you kindly spare a moment (approximately 2 minutes) to review our responses? Have our responses addressed your major concerns? If there is anything unclear, we will address it further. We look forward to your feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 9675\"}", "{\"comment\": \">First, the comparisons with one-step diffusion models are still missing. I agree that there are some differences among these methods. Nevertheless, it does not mean the results of these methods cannot be compared. Moreover, the authors claimed that \\\"distillation-based methods suffer from the bias of the pre-trained diffusion model\\\". It is unclear what the bias is exactly. It would be better to clarify it.\\n\\n**R6:** Thank you very much for your reply. We claim that \\\"distillation-based methods suffer from the bias of the pre-trained diffusion model\\\" because **these models require a pre-trained diffusion model to estimate the PF ODE trajectories.** For example, CD needs a one-step ODE solver based on a pre-trained diffusion model to generate a pair of adjacent data points on the PF ODE trajectory. 
The distillation indeed eases the difficulty of estimating the PF ODE trajectories in the early stage of training, which leads to a more efficient training process and slightly better performance than CT as presented in the original CM paper [1].\\nHowever, the estimated PF ODE trajectories inevitably contain some errors (or bias) relative to the real PF ODE trajectories, which leads to a problem recognized by researchers that **distillation limits the sample quality of the resulting model to that of the distilled diffusion model**, i.e., \\\"distillation limits the sample quality of the consistency model to that of the diffusion model\\\" from [2]. In contrast, our model relies on the consistency training target that learns the PF ODE trajectories from pure training data. Specifically, the consistency training uses the Euler method as the ODE solver, an unbiased estimation in the limit of the number of discretized intervals $N \\rightarrow \\infty$ [1]. The consistency training should be a more flexible and convenient training mechanism and is regarded \\\"as an independent family of generative models\\\" by [1][2]. Therefore, we focus our comparisons on **the different network architectures within the consistency training framework**. Besides the baseline U-Net model implemented in the original paper, we also implemented the consistency training with **two popular architectures (DiT and UViT) and presented extensive discussions of them.** We believe we have presented sufficient experiments for comparisons to support our major claims. \\n\\n> Second, \\\"any resolution\\\" seems to be overclaimed. The results in the resolutions higher than 128x128 are still missing. Although any resolution generation is possible, it is hard to believe whether this method really works well in high resolution.\\n\\n**R7:** Thank you for your reply. 
We have provided **sufficient evidence in the paper that our model is able to generate any-resolution images, including high-resolution images, even though it is trained on a relatively low-resolution dataset,** e.g., Figures 9, 18, 19, and 20. The inference resolution in Figure 9 is up to 512 (and it can of course be higher).\n\nWe need to clarify **the difference between the contribution of \"any-resolution\" and \"high-resolution\", since the target of this paper is to propose a method for any-resolution image generation.** Any-resolution image generation has many practical usage scenarios, e.g., device adaptation and network bandwidth & load time optimization. \n\nIn device adaptation, a single model capable of generating images at varying resolutions seamlessly adapts to devices with different display capabilities, ranging from mobile devices to high-definition monitors. **Unlike traditional methods that require multiple models or specific versions tailored to each device, our approach eliminates the need for redundant training processes, significantly reducing storage and computational overhead.** \n\nFrom a network bandwidth perspective, **the model dynamically adjusts the output resolution based on available network conditions, making it particularly well-suited for real-time applications in environments with varying bandwidth.** By generating lower-resolution images for devices or connections with limited bandwidth, the model ensures faster load times and smoother user experiences, without compromising the quality of higher-resolution images when the network and device capabilities allow. 
This feature is especially beneficial in mobile and cloud-based applications where data transfer constraints and latency are critical considerations.\\n\\nAs a result, **the \\\"any-resolution\\\" feature of our approach not only optimizes resource utilization but also enhances scalability and adaptability across different devices and network conditions, providing a more efficient and flexible potential solution for dynamic image generation tasks. This contribution is totally different from those methods that target the \\\"high-resolution\\\" image generation with very good quality.**\\n\\n\\n[1] Song, Yang, et al. \\\"Consistency Models.\\\" ICML, 2023.\\n\\n[2] Song, Yang, and Prafulla Dhariwal. \\\"Improved Techniques for Training Consistency Models.\\\" ICLR, 2024.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"After reading the response, my major concerns are still not well addressed.\\n\\nFirst, the comparisons with one-step diffusion models are still missing. I agree that there are some differences among these methods. Nevertheless, it does not mean the results of these methods cannot be compared. Moreover, the authors claimed that \\\"distillation-based methods suffer from the bias of the pre-trained diffusion model\\\". It is unclear what the bias is exactly. It would be better to clarify it.\\n\\nSecond, \\\"any resolution\\\" seems to be overclaimed. The results in the resolutions higher than 128x128 are still missing. Although any resolution generation is possible, it is hard to believe whether this method really works well in high resolution.\\n\\nBest,\\n\\nReviewer N1ca\"}", "{\"comment\": \"Thank you for providing clarification on the questions I raised. I appreciate the effort in addressing my concerns. The proposal of a consistency model capable of generating at arbitrary resolutions indeed shows potential benefits. However, I still have concerns regarding the performance of the proposed methodology. 
While I acknowledge the stability your approach offers for training in low-resource scenarios, I believe it is important to compare the quality of generated images directly with a consistency model trained using existing methods.\\n\\nFor example, referring to the results in Song et al. (2023), a consistency model trained on CIFAR-10 achieved an FID of 8.70 for 1-step generation. Demonstrating that your approach achieves performance at least comparable to, if not better than, these results would significantly strengthen your claims.\\n\\nFor these reasons, I have decided to maintain my score. Thank you again for your responses and the additional insights.\", \"reference\": \"Song, Yang, et al. \\\"Consistency Models.\\\" arXiv preprint arXiv:2303.01469 (2023).\"}", "{\"summary\": \"The authors observe that with low training resources and small batch size, the training of UNet-based consistency model is unstable, and proposed a Transformer-based generator that generates network parameters as INR for consistency training. The authors show better training stability and lower FID metric than the original UNet-based consistency model in the low-resource training setting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Update after rebuttal:**\\n\\n```\\nThe authors have addressed my questions and I will keep my rating. The idea of using any-resolution representation for consistency models is interesting, while I agree with other reviewers that more solid comparisons to exsiting methods could be helpful (for efficient training or for any-resolution generation).\\n\\n```\\n\\n---\\n\\n1. Using consistency training for image function generation is an interesting direction to explore. Since INRs are any-resolution decoders, it is natural to compute the consistency objective in the rendered patches.\\n2. The proposed reconstruction pre-training is simple and effective.\\n3. The authors show improved FID and other metrics. 
The training stability is also improved compared to the baseline UNet on common datasets.\", \"weaknesses\": \"1. It is not very clear why modeling as an INR can help improve the stability in low-resource training. With the Transformer generator and INR representation, is the input noisy image / target in training at fixed resolution or varied resolutions? More discussion about the intuition for the improvement might be helpful. Can the reconstruction pre-training also be applied for the UNet consistency model?\n2. Despite showing many metrics, the FID values for both the baseline and proposed method are very high (though it is due to the training budget). The results will be more convincing and solid when the methods can achieve a generally better quality.\n3. To claim the advantage of any-resolution generation, it is better to discuss and compare to more recent works that specifically work on any-resolution image generation, for example [1, 2].\n\n[1] Any-resolution training for high-resolution image synthesis, ECCV 2022\n\n[2] Image Neural Field Diffusion Models, CVPR 2024\", \"questions\": \"It is shown in the supplementary that the generated INR has better quality than bilinear interpolation when decoding to high resolutions. Is the high resolution higher than the resolution in training? If it is the case of resolution extrapolation, is any artifact observed in high resolutions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer NMKa (Part II)\", \"comment\": \"**W4:** (4) Computation...model.\n\n**R4:** Thank you very much for your suggestion. We have shown in lines 847-849 that we implemented DiT and UViT with a similar number of learnable parameters to our function generator. So **the parameter count and GPU cost of DiT and UViT are very close to those of our function generator**. 
We will highlight this setting in our future version. The major difference in the computation cost comparison between DiT, UViT, and our function generator is the multi-resolution sampling FPS, which has been shown in Table 2. \n\n**W5:** (5) It ... equations.\n\n**R5:** Thank you very much for pointing out our typo.\n\n\n[1] Any-resolution training for high-resolution image synthesis, ECCV 2022\"}", "{\"comment\": \"Dear Reviewer 8PPP,\n\nWe appreciate the time and effort that you have dedicated to reviewing our manuscript. We have carefully addressed all your queries and uploaded a new PDF version. Have our responses addressed your major concerns? If you have further concerns, please discuss them with us. We will address them further. We look forward to your feedback.\n\nBest regards,\n\nAuthors of Paper 9675\"}", "{\"title\": \"For all reviewers\", \"comment\": \"Dear all reviewers:\n\nThank you very much for your detailed review. Your constructive suggestions have polished our paper considerably. Based on the discussion, we have added more details to our paper and uploaded a new version. The modified parts are marked in orange. Please check the PDF and the Supplementary material for the full version. \n\nIf you have any concerns, please discuss them with us. We will address them further. We look forward to your feedback.\n\nBest regards,\n\nAuthors of Paper 9675\"}", "{\"title\": \"Response to author\", \"comment\": \"Thanks for the author's rebuttal. 
If, as responded by the author, the core innovation of this paper is \\\"any-resolution image generation,\\\" I believe it should be compared with existing diverse methods for any-resolution generation in terms of generation performance, rather than efficiency. However, such a comparison is currently missing. Moreover, the generation performance of the proposed method is only comparable to that of simply replacing UNet with a Transformer-based generator, without demonstrating the performance advantages of INR-based consistency training. While I appreciate the author's detailed responses, I believe the core issue has not been addressed. Therefore, my final decision is to reject the paper.\"}", "{\"summary\": \"This paper proposes a novel approach to image generation through consistency models, aiming to improve efficiency in generating high-quality, variable-resolution images. By adopting a Transformer-based generator that leverages implicit neural representations (INRs), the authors propose an architecture allowing flexible resolution generation with reduced resource demands. The method addresses challenges associated with traditional U-Net models by decoupling image resolution from model parameters and incorporating a pre-training phase for enhanced consistency training.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of a Transformer-based generator that produces image functions is an efficient approach that enables any-resolution sampling. This is a significant step forward from fixed-resolution U-Net generators.\\n2. By decoupling image resolution from model parameters, the proposed method reduces computational overhead and GPU memory usage, allowing more accessible high-resolution image generation.\\n3. 
The pre-training task effectively enhances the consistency model\\u2019s performance, leading to faster convergence and better denoising capabilities compared to models trained from scratch.\", \"weaknesses\": \"1. The paper utilizes Transformers in a relatively straightforward way for image generation. While the INR-based function generator is effective, the paper could benefit from a clearer explanation of how it fundamentally diverges from other Transformer-based models in diffusion applications.\\n2. The pre-training phase, while beneficial, adds additional complexity to the training pipeline. It would be helpful to compare the training cost between this method and other approaches.\\n3. The comparisons with existing one-step diffusion methods are missing. In fact, there are a lot of one-step methods, including ADD and DMD.\\n4. Given that the method proposed by the authors is capable of generating images of arbitrary resolution, in the selection of datasets in Section 4.1, the authors should consider including more datasets with various resolutions beyond the current 64 and 128 to facilitate a comprehensive comparison. In fact, a larger resolution has become more popular, e.g. 512 and 1024. It is hard to justify whether this method can actually accommodate arbitrary resolution without reporting the results of high resolution image synthesis.\\n5. To evaluate the method, more metrics should be considered when comparing different methods, including NIQE, CLIPIQA, MUSIQ, LPIPS, MANIQA, DISTS.\", \"questions\": \"More results are required and more methods should be compared.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer ftNQ,\\n\\nWe appreciate the time and effort that you have dedicated to reviewing our manuscript. We have carefully addressed all your queries and uploaded a new PDF version. Have our responses addressed your major concerns? 
If you have further concerns, please discuss them with us. We will address it further. We look forward to your feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 9675\"}", "{\"title\": \"Response to Reviewer ftNQ\", \"comment\": \"**W1**: In comparison ... sizes.\\n\\n**R1**: We have shown that the relatively high FID values are due to the low training batch size (32) that allows training with a single GPU. In fact, we have tried training with more GPUs and larger batch sizes (64 and 128). **The claim that the consistency training with UNet will oscillate is consistent as the training batch size reaches 128**, which harms the performance of consistency training. In contrast, our CM-func can have a more stable consistency training process which leads to better generation quality. It is unaffordable for us to train the model exactly following Song et al.\\u2019s experimental setup which has a batch size of 2048 or 4096. \\n\\n**W2**: In Figure 8, ..., efficient.\\n\\n**R2**: We need to emphasize that the **training time for our pretraining step is much less than the training time for consistency training, which is so resource-costly.** Take the CelebA-64 dataset (with a total of 202,599 images) as an example. Pretraining for 30 epochs (with a batch size of 32) requires approximately 190,937 iterations. This is only about one-fifth of the 900,000 iterations required for consistency training. In addition, the pretraining is applied solely to the Function Prediction Module, which accounts for less than half of the total model parameters. As a result, the time spent on pretraining is significantly lower than that of consistency training, amounting to about 5%. The exact training time for different models is presented here:\\n\\n||Total Training Time (GPU * Hours) for CelebA-64|\\n|-|-|\\n|CM-UNet| 302.5\\n|CM-DiT | 207.5\\n|CM-UViT| 212.5\\n|CM-Func w.o. pretrain| 223.75\\n|CM-Func w. 
pretrain| 223.75 + 12.9\n\n**Q1:** Total Training Time.\n\n**R3**: Please check the above table.\n\n**Q2:** Evaluation in Table 2.\n\n**R4**: We apologize for the unclear caption of Table 2. It should be \"Table for efficiency and accuracy of different generators on the CelebA dataset\". The Accuracy is evaluated on the CelebA dataset at a resolution of 64. \n\nThe multi-resolution sampling FPS is described in lines 483-485 and line 495. We use this metric to **evaluate the efficiency of different models when they are used to generate images at different resolutions**. In our setting, it is calculated as the FPS of generating 10000 image functions, each of which is required at 3 different resolutions ($32 \\times 32$, $64 \\times 64$, and $128 \\times 128$; therefore a total of 30000 images). The traditional models (CM-UNet, CM-UViT, CM-DiT) cannot sample any-resolution images, so three separate models must be trained for the three resolutions. The multi-resolution sampling FPSs for these three models are calculated as $\\frac{30000}{\\sum_{i=1}^{10000}{T_{32}^i}+\\sum_{i=1}^{10000}{T_{64}^i}+\\sum_{i=1}^{10000}{T_{128}^i}}$, where $T_k^i$ is the time to generate the $i^{th}$ image at resolution $k$. For our CM-Func, we only need to generate one image function per image, which can then be rendered as any-resolution images, so the multi-resolution sampling FPS for CM-Func is calculated as $\\frac{30000}{\\sum_{i=1}^{10000}{(T^i+T_{R32}+T_{R64}+T_{R128})}}$, where $T^i$ is the time to generate the $i^{th}$ image function, and $T_{R32}, T_{R64}, T_{R128}$ are the times to render the image function into images at resolutions 32/64/128 (which are nearly negligible compared to $T^i$). This metric reflects the efficiency of our method in sampling any-resolution images compared to traditional models. \n\n**Q3:** Effectiveness with Larger Batch Sizes.\n\n**R5:** Yes. 
In fact, we have tried training with more GPUs and larger batch sizes (64 and 128). **The claim that consistency training with U-Net oscillates still holds as the training batch size reaches 128**, which harms the performance of consistency training. \nOur model continues to perform better than the original CM at these batch-size settings. Please check the R1 response.\n\n**Q4:** Related Works.\n\n**R6:** Thank you very much for bringing our attention to this interesting work. We will add this reference to our related works. Different from our work, this work delivers an explicit field characterization to model signals of different modalities (images, shapes, and spherical data). Their target is to provide a unified framework that can denoise on the field with a single training stage, in contrast to Functa and GEM. Due to the explicit field characterization, **their model cannot achieve any-resolution image sampling, as their coordinates are fixed during training**. In contrast, models relying on implicit fields, such as Functa and our method, have the potential for any-resolution image sampling. As discussed in the related work of our paper, Functa requires two-stage training, while our model can be trained in an end-to-end manner. In addition, our model can achieve one-step generation thanks to consistency training, while DPF and Functa still require multi-step sampling because their models are built on DDPM.\"}", "{\"summary\": \"The paper addresses the limitations of using a U-Net generator with consistency models, i.e., the substantial computational resources required and the difficulty in generating images at user-specified resolutions. To address these challenges, the researchers propose replacing the U-Net generator with an implicit neural representation (INR), which demonstrates potential in producing images with scalable resolutions. 
The proposed method reduces training costs relative to the U-Net generator while achieving superior image quality as quantified by common evaluation metrics.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed method is conceptually sound and effectively addresses the limitations of the U-Net-based consistency model.\\n\\nExperimental results support the efficacy of incorporating INR within consistency models for improved image generation.\", \"weaknesses\": \"(1) Novelty: while INR is applied here for high-resolution image generation within the context of consistency models, INR is already a widely-used technique in other 2D image generation frameworks. The contribution appears to be somewhat incremental.\\n\\n(2) Related work: the review of related work on INR-based methods is somewhat insufficient, particularly in the context of high-resolution image generation. Additional discussion on alternative high-resolution generation strategies would be beneficial.\\n\\n(3) Performance comparison: although the method shows reduced computational cost, its image quality appears less competitive compared to replacing UNet with DiT, as observed in Table 2.\\n\\n(4) Computation cost comparison: it would be helpful to include a broader computational cost comparison with other methods listed in Table 2, rather than restricting comparisons solely to the CM-UNet model.\\n\\n(5) It would be clearer and more concise to use \\u201cEq.\\u201d rather than \\u201cEq. equation.\\u201d when referring to equations.\", \"questions\": \"See the limitations above, which detail the questions concerned, and it is expected to address these issues.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> I believe it is important to compare the quality of generated images directly with a consistency model trained using existing methods. 
For example, referring to the results in Song et al. (2023), a consistency model trained on CIFAR-10 achieved an FID of 8.70 for 1-step generation. Demonstrating that your approach achieves performance at least comparable to, if not better than, these results would significantly strengthen your claims.\n\n**R7:** Thank you very much for your reply. We fully agree that it is important to compare the quality of generated images directly with consistency models, and in fact this is exactly how we evaluate our method in the paper.\n\nAs presented in lines 376-377 and line 388, we **exactly follow the setting from Song et al. (2023), except for setting the batch size to 32, which is much smaller than theirs**, i.e., 512 for CIFAR-10. We try to train the consistency models with a single or just a few GPUs that we can afford. The results show that the original consistency models with U-Net oscillate when trained with small batch sizes, which leads to poor performance, as shown in Figure 1 (a) and Table 1. And we empirically verify that **a Transformer-based generator (including DiT and our function generator) is much more stable, achieving better performance than the U-Net generator.**\n\nIf you have further questions, please feel free to discuss them with us.\"}", "{\"title\": \"Response to Reviewer N1ca (Part I)\", \"comment\": \"**W1:** The paper ... applications.\n\n**R1:** Our method fundamentally diverges from other Transformer-based models in diffusion. **The major contribution of this paper goes well beyond introducing Transformers into consistency models: we deliver a Transformer that generates an image function**, parameterized as an MLP, as presented in Figure 1. We have provided a detailed discussion of the differences between our model and both U-Net and other Transformer-based models in lines 479-504 and Table 2. 
Specifically, **the traditional consistency pipelines suffer from fixed-resolution image generation**: if images at different resolutions are required, they cannot generate consistent images across resolutions, and a separate model must be trained for each resolution. The inference process is independent across resolutions and therefore very costly (see the Multi-Resolution Sampling FPS in Table 2). In contrast, our method delivers a Transformer to generate image functions, which can then be rendered into any-resolution images with nearly negligible time cost. Therefore, our method is quite suitable for scenarios where images at multiple resolutions are required. \n\n**W2:** The pre-training ... approaches.\n\n**R2:** Thank you very much for your suggestion. We need to emphasize that the **training time for pretraining is much less than the training time for consistency training, which is very resource-costly.** Take the CelebA-64 dataset (with a total of 202,599 images) as an example. Pretraining for 30 epochs (with a batch size of 32) requires approximately 190,937 iterations. This is only about one-fifth of the 900,000 iterations required for consistency training. In addition, the pretraining is applied solely to the function prediction module, which accounts for less than half of the total model parameters. As a result, the time spent on pretraining is significantly lower than that of consistency training, amounting to about 5%. The exact training time for different models is presented here:\n||Total Training Time (GPU * Hours) for CelebA-64|\n|-|-|\n|CM-UNet| 302.5\n|CM-DiT | 207.5\n|CM-UViT| 212.5\n|CM-Func w.o. pretrain| 223.75\n|CM-Func w. pretrain| 223.75 + 12.9\n\n**W3:** The ... DMD.\n\n**R3:** Thank you very much for bringing our attention to these interesting works. 
We admit that several one-step methods have been proposed. However, **most of these works need to fine-tune or distill from an existing diffusion model**, e.g., DMD [1], which you mentioned, and the CD mode of consistency models. The distillation-based methods may suffer from the bias of the pre-trained diffusion model. In contrast, our method relies on consistency training, which can **train the generative model in isolation and only from data**. This should be a more flexible and convenient training mechanism, as we mention in lines 40-42, making consistency models \"an independent family of generative models\" [2][3]. More importantly, our model **generates an image function that supports any-resolution image generation, which is not achieved by other one-step diffusion methods.**\n\n**W4:** Given ... synthesis.\n\n**R4:** Thank you very much for your suggestion about training on higher-resolution images. However, it is quite unaffordable for us to train consistency models at such a high resolution. So far, we have shown that even though we train our model on a 64- or 128-resolution image dataset, it can generate consistent any-resolution images, e.g., images at 512 resolution in Figure 9. Since we generate an image function that can be queried with arbitrary coordinates within a continuous range, our model **can theoretically guarantee image generation at arbitrary resolution**. **Training at high resolution can improve the performance of our model on high-resolution image generation (a question of better or worse), but it does not affect the ability of our model to generate arbitrary-resolution images (a question of yes or no).**
In fact, at the very beginning of the paper, i.e., (b) and (c) in Figure 1 and lines 94-95, we have clearly shown the key contribution that distinguishes our method from other methods: **efficient any-resolution image generation**, especially when generating images at multiple resolutions or at an unknown resolution.\n\n> Currently, the ... are not clearly presented. \n\n**R7:** Thank you for your suggestion. We did not compare our method to existing any-resolution generation methods because **the efficiency of these methods is theoretically not on the same order of magnitude as ours, especially when scaling to higher-resolution images. We therefore focus on discussing the efficiency of our method in comparison to consistency models, which are highly efficient among existing generative models.**\n\n**Current papers on any-resolution image generation, such as [1][2], do not provide any discussion of their generation efficiency.** This is because these methods inherently require a long time to generate any-resolution images, either through patch-by-patch generation [1] or two-stage INR-based diffusion [2]. The former requires multiple forward passes to generate many patches, while the latter necessitates a large number of diffusion steps. \n\nWe tested the inference efficiency of the patch-by-patch method [1] and found that it takes about 0.04s to generate a 256\*256 patch; therefore, [1] achieves about 25 FPS for 256\*256 images and 6.25 FPS for 512\*512 images. 
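The throughput figures above follow directly from the pixel count; a quick arithmetic check, using only the numbers quoted above:

```python
# Patch-by-patch generation at ~0.04 s per 256x256 patch: a 512x512 image
# needs (512*512)/(256*256) = 4 patches, so throughput drops by that factor.
patch_time = 0.04                                        # seconds per 256x256 patch
fps_256 = 1 / patch_time                                 # ~25 FPS at 256x256
fps_512 = 1 / (patch_time * (512 * 512) / (256 * 256))   # ~6.25 FPS at 512x512
```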
**The inference FPS of [1] thus decreases in proportion to the number of generated pixels as the resolution increases.** However, the inference efficiency of our model is much less affected by the resolution of the generated images, because our model only needs to generate a fixed-shape set of parameters for the image function, and the process of querying pixels at each coordinate can run in parallel.\n\n||Inference FPS-256 $\\uparrow$ |Inference FPS-512 $\\uparrow$ | FPS Scaling-down Rate from 256 to 512 $\\downarrow$\n|-|-|-|-|\n|anyres-GAN [1] | 25 | 6.25 | 25/6.25=4\n|Ours | **96** | **71** | **1.35**\n\nTherefore, **in the paper, we focus on discussing the efficiency of our method in comparison to consistency models, which are highly efficient among existing generative models.** Consistency models based on U-Net can achieve highly efficient one-step diffusion, and our method further enhances this by 1) incorporating a more efficient network architecture that improves generation quality and computational efficiency, and 2) providing the ability to generate any-resolution images, making consistency models even more efficient. \n\n> When considering ... evaluations.\n\n**R8:** Thank you for your question. We claim that **our model is comparable to Transformer-based generators. We believe this claim is well supported by Table 2 and the quantitative results of the IQA metrics reported in the response to reviewer N1ca.** In most metrics, our method achieves results comparable to or even better than CM-DiT, such as IS and NIQE. \n\nBasically, it is reasonable to observe some drop in quality if we force the network to generate an image function rather than an image. This is because directly generating an image does not require modality transformation: the network's input and output have exactly the same shape, so it is easy to use residual connections to connect the input and output and denoise the input signal. 
However, outputting an image function involves modality transformation (image to INR), so the network must perform both denoising and modality transformation, which makes training a single unified network harder.\n\nIn fact, we present the quantitative results in Table 2 to show the effectiveness of our DiT-like encoder in our feature extraction module. **Even though generating an image function brings some difficulties to network training, our function generator still achieves performance comparable to those image generators.**\n\nIn addition, our function generator **enjoys a more flexible sampling process and a higher multi-resolution sampling FPS than traditional image generators.**\n\n[1] Any-resolution training for high-resolution image synthesis, ECCV 2022\n\n[2] Image Neural Field Diffusion Models, CVPR 2024\"}", "{\"title\": \"Response to Reviewer 8PPP\", \"comment\": \"**W1**: It is not very clear why modeling as an INR can help improve the stability in low-resource training. With the Transformer generator and INR representation, is the input noisy image/target in training at fixed resolution or varied resolutions? More discussion about the intuition for the improvement might be helpful. Can the reconstruction pre-training also be applied for the UNet consistency model?\n\n**R1**: We attribute the stable training to the **design of the Feature Extraction Module, which is based on DiT** and delivers an adaLN-Zero layer to modulate the Transformer encoder. The original DiT for diffusion models has already been shown to exhibit superior scalability compared to U-Net. We adapt it to consistency models, and the ablation study for CM-DiT (Table 2) shows that DiT performs consistently well on consistency models (better than CM-UNet).\nThe input noisy image and the target are of a fixed resolution, and the model outputs an image function, which can then be sampled at arbitrary resolutions. 
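The point that one fixed set of generated parameters can be sampled at arbitrary resolutions can be illustrated with a toy coordinate-based image function; the tiny 2-layer MLP and random weights below are purely illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def render(params, resolution):
    """Query a toy coordinate-based image function on a square grid.
    The parameters stay fixed; only the query grid changes per resolution."""
    w1, b1, w2, b2 = params
    xs = np.linspace(-1.0, 1.0, resolution)
    coords = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)  # (H*W, 2)
    h = np.tanh(coords @ w1 + b1)  # per-coordinate hidden features
    rgb = h @ w2 + b2              # one RGB value per coordinate
    return rgb.reshape(resolution, resolution, 3)

rng = np.random.default_rng(0)
params = (rng.normal(size=(2, 16)), np.zeros(16),
          rng.normal(size=(16, 3)), np.zeros(3))
low = render(params, 64)    # 64x64 sample of the image function
high = render(params, 512)  # 512x512 sample, no retraining or re-generation
```

Generating `params` once and re-querying the grid is why rendering at extra resolutions is nearly free compared to running the generator again.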
In Appendix C.3.2 and Table 4, we have discussed scenarios where the input noise resolution varies for the CelebA dataset. We find that **increasing the resolution of the input noise slightly improves the FID of the generated images due to the larger noise space, though this comes at the cost of longer runtime.**\\n\\nThe **reconstruction pre-training cannot be applied to the UNet consistency model**, as the reconstruction pre-training is to train a model that can predict its INR for an input image, while the UNet cannot perform such a function.\\n\\n**W2**: Despite showing many metrics, the FID values for both the baseline and proposed method are very high (though it is due to the training budget). The results will be more convincing and solid when the methods can achieve a generally better quality.\\n\\n**R2**: Thanks for your suggestion. We have shown that the relatively high FID values are due to the low training batch size (32) that allows training with a single GPU. In fact, we have tried training with more GPUs and larger batch sizes (64 and 128). **The claim that consistency training with UNet will oscillate still holds as the training batch size reaches 128**, which harms the performance of consistency training. In contrast, our CM-func can have a more stable consistency training process, which leads to better generation quality.\\n\\n**W3**: To support the claimed advantage of any-resolution generation, it is better to discuss and compare to more recent works that specifically work on any-resolution image generation, for example [1, 2].\\n\\n**R3**: Thank you very much for bringing these two interesting works to our attention; we will include them in our related work. [1] tries to generate any-resolution images in a patch-by-patch manner with GAN supervision, which is totally different from our setting. **The output of their generator is still of a fixed resolution**, specifically $\\ud835\\udc5d \\\\times \\ud835\\udc5d$, and must be square-shaped. 
In other words, to generate larger images, their model requires multiple runs to generate patches, which are then concatenated together. In contrast, our model only needs a single run to produce a continuous function of the corresponding image, which can then be sampled at any resolution. Furthermore, **their model cannot generate images at lower resolutions**, as the patches they generate are at least $\\ud835\\udc5d \\\\times \\ud835\\udc5d$. Our model, however, allows for sampling at any desired resolution, offering greater sampling flexibility.\\n\\n[2] is an interesting work that should be included in the \\\"Diffusion Models Based on Implicit Neural Representations\\\" section of our related work. As with other diffusion models based on INRs, they first train an INR converter as an encoder module for representations, and then separately train a diffusion model on these INR representations. **It is a two-stage training and inference pipeline, causing inflexible training and potential error accumulation for each stage.** In contrast, our paper proposes a novel unified architecture that is trained in an end-to-end manner and can generate image functions from noise in a single stage.\\n\\n**Q1**: It is shown in the supplementary that the generated INR has better quality than bilinear interpolation when decoding to high resolutions. Is the high resolution higher than the resolution in training? If it is the case of resolution extrapolation, is any artifact observed in high resolutions?\\n\\n**R4**: Yes, the high resolution sampled from our generator can be higher than the resolution in training. **No artifact is observed.** Here, we follow [3] to apply variational coordinates to eliminate artifacts.\\n\\n\\n[1] Any-resolution training for high-resolution image synthesis, ECCV 2022\\n\\n[2] Image Neural Field Diffusion Models, CVPR 2024\\n\\n[3] Attention Beats Linear for Fast Implicit Neural Representation Generation, ECCV 2024\"}" ] }
C5w86qtcgY
Decentralized Finite-Sum Optimization over Time-Varying Networks
[ "Dmitry Metelev", "Savelii Chezhegov", "Alexander Rogozin", "Aleksandr Beznosikov", "Alexander Sholokhov", "Alexander Gasnikov", "Dmitry Kovalev" ]
We consider decentralized time-varying stochastic optimization problems where each of the functions held by the nodes has a finite-sum structure. Such problems can be efficiently solved using variance reduction techniques. Our aim is to explore the lower complexity bounds (for communication and number of stochastic oracle calls) and find optimal algorithms. The paper studies strongly convex and nonconvex scenarios. To the best of our knowledge, variance-reduced schemes and lower bounds for time-varying graphs have not been studied in the literature. For nonconvex objectives, we obtain lower bounds and develop an optimal method GT-PAGE. For strongly convex objectives, we propose the first decentralized time-varying variance-reduction method ADOM+VR and establish a lower bound in this scenario, highlighting the open question of matching the algorithms' complexity and lower bounds even in the static network case.
[ "convex optimization", "decentralized optimization" ]
Reject
https://openreview.net/pdf?id=C5w86qtcgY
https://openreview.net/forum?id=C5w86qtcgY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xyJbMuck8X", "vgmmSoNcHX", "luh5eYD39N", "gsaeviimQY", "eHcwKWAoj5", "dvGBQNzg71", "dYNYxewhU2", "LjwZu9lzh2", "Ds3M3d4qRr", "79u7802HvK", "4MR8xxIco2", "0Oe6vYRqnw" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "decision", "meta_review", "official_comment" ], "note_created": [ 1733217724478, 1730786174937, 1730795015447, 1733217640674, 1733217577259, 1730577249769, 1733217841674, 1733217762919, 1729692193212, 1737524271987, 1734160651030, 1733217393418 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13610/Authors" ], [ "ICLR.cc/2025/Conference/Submission13610/Reviewer_2Swy" ], [ "ICLR.cc/2025/Conference/Submission13610/Reviewer_JkLV" ], [ "ICLR.cc/2025/Conference/Submission13610/Authors" ], [ "ICLR.cc/2025/Conference/Submission13610/Authors" ], [ "ICLR.cc/2025/Conference/Submission13610/Reviewer_CnvV" ], [ "ICLR.cc/2025/Conference/Submission13610/Authors" ], [ "ICLR.cc/2025/Conference/Submission13610/Authors" ], [ "ICLR.cc/2025/Conference/Submission13610/Reviewer_SSdw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13610/Area_Chair_SeZo" ], [ "ICLR.cc/2025/Conference/Submission13610/Authors" ] ], "structured_content_str": [ "{\"title\": \"Answer to Reviewer CnvV\", \"comment\": \"Dear Reviewer, thank you for your thorough review.\\n\\n**Weaknesses**.\\n\\n**Paper organization**.\\n\\nThe aim of the paper is to propose optimal algorithms for decentralized finite-sum optimization. In modern optimization theory [3], a problem class and an algorithm class are determined by introducing corresponding assumptions. In our paper, the problem class is decentralized finite-sum problems as defined in Section 2 and the method class is determined as first-order decentralized algorithms in Section 4.1. 
After that, the iteration complexity of the algorithm is compared to lower complexity bounds for the given problem class. If these two complexities coincide up to a constant term, the corresponding method is called optimal. Optimal methods for strongly convex and nonconvex scenarios use different techniques and have different theoretical analyses. In particular, Nesterov momentum allows one to reach optimality in the strongly convex case but is useless in the nonconvex case. Therefore, a method optimal for one scenario cannot be used for the other scenario out-of-the-box, even for usual (i.e. not decentralized) optimization.\\n\\nIn our work we used previously existing techniques to build new methods for each of the cases (strongly convex and nonconvex). That is why in the strongly convex case we used ADOM+ and in the nonconvex case we applied PAGE. As mentioned above, a method designed for the strongly convex scenario cannot be directly applied to the nonconvex one and vice versa. We managed to obtain an optimal algorithm for the nonconvex case, closing the previously existing gap in the literature. We also obtained an algorithm approaching optimality for the strongly convex case.\\n\\n**Assumption 2.5 on $\\\\chi$**.\\n\\nIt is possible to construct a matrix sequence $W(k)$ satisfying Assumption 2.5 under realistic assumptions on the time-varying network. The only restriction on the network is that the graph $\\\\mathcal{G}^k$ is connected at each iteration. After that, we choose $W(k) = L(\\\\mathcal{G}^k) / \\\\lambda_{\\\\max}(L(\\\\mathcal{G}^k))$, where $L(\\\\mathcal{G}^k) = D(\\\\mathcal{G}^k) - A(\\\\mathcal{G}^k)$ denotes the graph Laplacian matrix (see lines 196-198 of our paper; here $D(\\\\mathcal{G}^k)$ denotes the diagonal matrix containing the node degrees and $A(\\\\mathcal{G}^k)$ is the adjacency matrix). 
For each of the graphs $\\mathcal{G}^k$, denote its Laplacian condition number $\\chi_k = \\frac{\\lambda_{\\max}(L(\\mathcal{G}^k))}{\\lambda_{\\min}^+(L(\\mathcal{G}^k))}$. Since the graph is connected, $\\chi_k < +\\infty$, and since $\\lambda_{\\max}(L(\\mathcal{G}^k))\\geq \\lambda_{\\min}^+(L(\\mathcal{G}^k))$ we have $\\chi_k\\geq 1$. Moreover, since the set of vertices is fixed, there is a finite number of different graphs on these vertices. It is straightforward to check that $\\chi = \\max_k \\chi_k$ satisfies Assumption 2.5 and $1\\leq\\chi < +\\infty$.\\n\\nAlso note that for a fully-connected graph we have $\\chi = 1$. One gossip iteration on a fully connected network corresponds to full averaging, which corresponds to Assumption 2.5. This explains why we do not impose that $\\chi > 1$.\\n\\nOur work does not stick to taking graph Laplacians: Assumption 2.5 allows using any other matrices that satisfy the gossip requirements.\"}", "{\"summary\": \"This manuscript addresses a decentralized stochastic optimization problem with a finite sample set at each node over time-varying networks. The authors propose two algorithms tailored for strongly convex and non-convex objective functions, respectively, and provide a lower bound analysis to discuss the optimality of the proposed algorithms.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This work provides rich theoretical analytical results for different objective function assumptions. 
By comparing with the proposed lower bound, it is shown that the proposed algorithm is optimal in their scenario;\", \"The lower bound in this paper takes into account the smoothness coefficient of each node, which is more refined.\"], \"weaknesses\": [\"While this work presents a set of results for distributed stochastic optimization problems, I found the paper difficult to follow, with unclear motivation and insufficient discussion on the necessity of this study. My specific concerns are as follows:\", \"On the significance of this work: The paper addresses time-varying topologies in distributed networks, but it is unclear why these topologies pose unique challenges for distributed algorithms. In my view, as long as Assumption 2.5 holds (i.e., the topology is connected at each iteration) and multi-round communication acceleration is employed, a time-varying topology does not seem substantially more challenging than that of fixed topology. Thus, this work appears to be tackling a corner case, especially considering that similar problems have been widely studied, e.g., Kovalev et al., 2021a, Luo and Ye, 2022, Huang and Yuan, 2022, Li and Lin, 2021. The authors need to further clarify the novelty and contribution of this work against these existing works.\", \"On the Assumptions on Smoothness: The assumptions regarding smoothness are somewhat confusing. The authors consider three types of smoothness but do not compare their differences or clarify which assumptions are strongest. Additionally, Assumption 2.1 requires that each sample\\u2019s objective function is smooth. It would be helpful to discuss whether this holds in typical machine learning tasks and if it is verifiable in practice.\", \"Regarding the proof of Theorem 3.2, compared to (Kovalev et al., 2021a), it seems to differ in only one constrained VR gradient estimation error (cf. Lemma B.1), while the other proofs are almost identical to Kovalev et al., 2021a, differing only in some parameter choices. 
This makes the technical contribution of this paper vague.\", \"On Readability and Clarity: The paper\\u2019s readability could be significantly improved. In the algorithm design section, the authors rely heavily on prior literature to explain their algorithmic approach, offering limited unique insights. This reliance diminishes the perceived novelty of the work. Additionally, the paper contains many symbols that are either used before being defined or left undefined altogether (e.g., $\\\\lambda^{+}_{\\\\text{min}}$), making it challenging to follow.\"], \"questions\": [\"Why do the two proposed algorithms require different function smoothness assumptions (cf. Assumptions 2.1, 2.2 and 2.3)?\", \"The LibSVM dataset used for the experiments in this work seems inadequate; the reviewer would like to know if the algorithm could be applied to more complex datasets such as CIFAR-10/100 to further validate the effect of the algorithm.\", \"How is the time-varying topology being implemented in the experiments?\", \"Why is the reference Metelev et al. (2024) not discussed in the main text? In fact, the time-varying topological sequences used in this work for obtaining the lower bound adopt their strategy.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N.A.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the decentralized finite-sum optimization problem over time-varying graphs. Theorem 4.3 and Theorem 4.5 establish lower bounds on computational and communication complexities, resp., for strongly convex and non-convex optimization. Furthermore, the paper presents two algorithms ADOM+VR and GT-PAGE, resp., for strongly convex and non-convex optimization while comparing their performance with the state-of-the-art methods both analytically (see Tables 1 & 2) and numerically. 
Notably, GT-PAGE is optimal by achieving the lower complexity bounds while ADOM+VR is optimal in terms of communication iterations. Some open problems have also been highlighted. The numerical examples for the LibSVM dataset show the superior performance of the algorithm in terms of communication and computational complexities. Interestingly, the optimal algorithm GT-PAGE presents better yet not strongly superior performance in comparison to the state-of-the-art methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper studies an important problem.\", \"The paper is well-written.\", \"Notation, assumptions, and results are presented clearly.\", \"The established lower bounds on the complexities and presenting optimal algorithms achieving these bounds are solid contributions.\", \"Numerical examples further strengthen the paper's claims.\"], \"weaknesses\": [\"The paper is fairly technical. To smoothen the paper's technical content and to improve the paper's accessibility, more qualitative discussions can be included.\", \"The introduction directly jumps to the optimization problem formulation. More motivating examples for the problem formulation would improve the paper.\", \"In the introduction, Tables 1 and 2 present the complexities in terms of parameters n, L, \\\\mu, and some others. However, it is not clear what these parameters, e.g., L and \\\\mu, refer to.\", \"Numerical examples do not provide error bars.\"], \"questions\": [\"Can the authors clarify whether there is no need to represent error bars across independent experiments to mitigate the impact of stochasticity in the numerical examples?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to Reviewer 2Swy continues\", \"comment\": \"**References**\\n\\n[1] Scaman, Kevin, et al. 
\\\"Optimal algorithms for smooth and strongly convex distributed optimization in networks.\\\" international conference on machine learning. PMLR, 2017.\\n\\n[2] Kovalev, Dmitry, et al. \\\"ADOM: accelerated decentralized optimization method for time-varying networks.\\\" International Conference on Machine Learning. PMLR, 2021.\\n\\n[3] Li, Huan, and Zhouchen Lin. \\\"Accelerated gradient tracking over time-varying graphs for decentralized optimization.\\\" Journal of Machine Learning Research 25.274 (2024): 1-52.\\n\\n[4] Nedic, Angelia, Alex Olshevsky, and Wei Shi. \\\"Achieving geometric convergence for distributed optimization over time-varying graphs.\\\" SIAM Journal on Optimization 27.4 (2017): 2597-2633.\\n\\n[5] Kovalev, Dmitry, et al. \\\"Lower bounds and optimal algorithms for smooth and strongly convex decentralized optimization over time-varying networks.\\\" Advances in Neural Information Processing Systems 34 (2021): 22325-22335.\\n\\n[6] Li, Zhize, et al. \\\"PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization.\\\" International conference on machine learning. PMLR, 2021.\\n\\n[7] URL: https://networkx.org/documentation/stable/auto_examples/drawing/plot_random_geometric_graph.html\"}", "{\"title\": \"Answer to Reviewer 2Swy\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your thorough review and comments. We are grateful that you acknowledged the strengths of our work.\\n\\n**Contribution of the work**.\\n\\nThe contributions of our paper are algorithms for decentralized finite-sum optimization over time-varying networks, optimal in the nonconvex case, as well as lower bounds. 
Previously, methods were known for:\\n\\n* decentralized non-stochastic optimization over time-varying networks (Kovalev et al., 2021a, Li and Lin, 2021);\\n\\n* decentralized stochastic optimization without finite-sum structure over time-varying networks (Huang and Yuan, 2022);\\n\\n* finite-sum optimization over static networks (Luo and Ye, 2022).\\n\\nAs you can see, the combination of time-varying graphs and finite-sum structure has not been previously studied. Our work closes the gap in this direction (in the nonconvex case).\\n\\n**Proofs identical to previous works**.\\n\\nThe main technical challenge lies in the proof of GT-PAGE. Please see the common answer to Reviewers for details.\\n\\n**Time-varying topology is not challenging**.\\n\\nWe respectfully disagree that *time-varying topology does not seem substantially more challenging than that of fixed topology*. Static and time-varying networks are two different classes of problems. A typical approach to decentralized problems is describing the constraint set $x_1 = \\ldots = x_m$ by an affine constraint involving a gossip matrix $W$, i.e. $Wx = 0$. The main obstacle of time-varying networks, even if they stay connected at each iteration, is that the matrix $W_k$ changes between iterations, thus changing the description of the constraint set. This obstacle requires new approaches for decentralized problems. Working with time-varying graphs requires additional techniques such as gradient tracking [3,4] and error feedback [2,5]. In fact, after optimal methods for static graphs were obtained in [1], it took four years for the community to obtain optimal methods for time-varying graphs [2].\\n\\n**Assumptions on Smoothness**.\\n\\nAssumption 2.1 holds, for example, for logistic regression. 
Consider one summand of the form $f_{ij}(x) = \\log(1 + \\exp(b_{ij}\\langle a_{ij}, x\\rangle))$, where $x$ is the problem weight vector, $a_{ij}$ denotes the feature vector and $b_{ij}\\in\\{-1, 1\\}$ is the feature label. It can be checked (e.g. by computing the Hessian) that $L_{ij} = \\|a_{ij}\\|^2 / 4$.\\n\\nFor the relation of smoothness constants, see line 177: $L\\leq \\overline{L}\\leq nL$, where $n$ is the dimension. This can be seen by the triangle inequality and by the fact that for convex smooth functions $g$ and $h$ we have $L(g)\\leq L(g + h)$. Analogously, it can be shown that $L\\leq \\hat L\\leq \\sqrt{n} L$. We will add the corresponding discussion to the revised version of the work.\\n\\n**Paper readability**.\\n\\nIn the results section, we do describe previously known results. However, we believe that our explanations show the way of obtaining new results and think that this increases the readability of the paper. If we do not provide clarifications, our results might seem more challenging and unique, but this will be done at the cost of keeping the reader uninformed of the basics underlying the proposed algorithms. Summing up, we do not see a problem in explaining the existing results.\\n\\nThank you for pointing out the undefined symbols. We will correct this issue in the revised version of the paper.\\n\\n**Questions**\\n\\n* Why do the two proposed algorithms require different function smoothness assumptions (cf. Assumptions 2.1, 2.2 and 2.3)?\\n\\nThe methods are based on their corresponding counterparts in decentralized optimization. In the nonconvex case, we use the algorithm PAGE [6], which requires an average smoothness assumption similar to Assumption 2.3. In the strongly convex case, we adopt Katyusha [7], which uses worst-case constants as in Assumptions 2.1 and 2.2. 
In other words, this choice of assumptions is driven by backward compatibility.\\n\\n* The LibSVM dataset used for the experiments in this work seems inadequate; the reviewer would like to know if the algorithm could be applied to more complex datasets such as CIFAR-10/100 to further validate the effect of the algorithm.\\n\\nThank you for your suggestion. The choice of simple datasets and models makes it possible to tune the step-sizes and other algorithm parameters according to theory.\\n\\n* How is the time-varying topology being implemented in the experiments?\\n\\nIt is implemented using random geometric graphs [7].\\n\\n* Why is the reference Metelev et al. (2024) not discussed in the main text? In fact, the time-varying topological sequences used in this work for obtaining the lower bound adopt their strategy.\\n\\nThank you for noticing it. Indeed, this reference is used for the lower bounds and we mention it in line 1677 in the appendix. We will mention this paper in the main part of our work.\"}", "{\"summary\": \"This paper studies stochastic decentralized optimization problems of strongly-convex and non-convex settings over static and time-varying networks. For the strongly-convex setting, the ADOM+VR algorithm is proposed and the convergence property is established; for the non-convex setting, the GT-PAGE algorithm is proposed and the convergence property is established. The authors also study the lower bounds of both strongly-convex and non-convex decentralized optimization problems theoretically. The efficacy of the proposed algorithms is validated in simulations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The topic of the work is well aligned with the ICLR community and it is significant to extend the lower bound analysis for decentralized optimization problems. The work also contains a great amount of theoretical work.\", \"weaknesses\": \"1. 
The organization of the manuscript is not clear and different pieces (Sections 3.2, 3.3, 4.1) are isolated from each other. Since they are presented in this manuscript, it is assumed they have some internal relationship. For example, is the optimal non-convex algorithm, GT-PAGE, also optimal in the strongly-convex setting? Why is the strongly-convex algorithm based on the ADOM+ algorithm while the non-convex algorithm is based on the PAGE algorithm?\\n\\n2. There is a concern that Assumption 2.5 is too strong, i.e., assuming the existence of $\\\\chi \\\\geq 1$ (also why $\\\\geq 1$ rather than $ > 1$?). This work studies time-varying networks and the matrices $W(k)$ change at every time step $k$. How is it guaranteed that such $\\\\chi$ exists? Are there any other requirements for the network connectivity or matrix construction to guarantee the existence of $\\\\chi$?\", \"questions\": \"1. The organization clarity (please see weakness-1). In order to better elaborate the contributions in this work, it is suggested that the authors re-write all bullet points in the contribution paragraph. For the ADOM+VR algorithm, please state the type of optimization problem/network setting and why it is optimal (is it applied to a different optimization problem or the convergence speed is faster than others); For the GT-PAGE algorithm, please also state the type of optimization problem and why it is optimal. It is not strong enough to only mention the elements of different algorithms in the contribution.\\n\\n2. The achievability of Assumption 2.5 (please see weakness-2).\\n\\n3. In experiments, are those plots presenting training set performance or testing set performance? It is expected both training and testing performance are shown. 
Moreover, it would be good to show classification accuracy on test set.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to Reviewer SSdw\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your time and effort while reviewing our paper. Below we answer the questions you have raised.\\n\\n**Incremental contribution and niche setting.**\\n\\nPlease see the common answer to Reviewers.\\n\\n**Algorithms use multi-consensus and may perform badly in asynchronous setting.**\\n\\nOur algorithms are not designed for the asynchronous setting, but still GT-PAGE is optimal in its setup. Multi-step communication is a typical trick in decentralized optimization [1, 2]. In our understanding, this trick is mainly needed to eliminate the additional factor $\\\\chi$ in the number of gradient calls, i.e. the trick is mostly theoretical. Anyway, using this technique leads to optimal complexities, which is effective, as you mentioned. Moreover, Acc-GT [5], which is an optimal method for time-varying networks, has a network dependence $\\\\chi^{3/2}$ without multi-consensus. Finally, you mentioned that our methods are *unlikely to extend well* to asynchronous setup, but this fact cannot be quickly checked in our discussion.\\n\\n**DADAO**\\n\\nThank you for pointing out the DADAO paper. We understand your comment that in the asynchronous setup DADAO is *likely to lead* to better solutions and *may* close the complexity gap. But please see that our paper already provides methods and closes the gap in the nonconvex setting. Therefore, we acknowledge that other approaches are possible but do not see it as a weakness of our work.\\n\\n\\n**Questions**\\n\\n* Multi-consensus matrix.\\n\\nThank you for raising a question on multi-consensus. 
It was meant that each $W(k)$ is replaced by a product of $T$ consecutive matrices $\\tilde W(kT, T) = W(kT + (T - 1)) W(kT + (T - 2))\\ldots W(kT)$ that corresponds to $T$ consecutive communication rounds.\\n\\n**References**\\n\\n[1] Scaman, Kevin, et al. \\\"Optimal algorithms for smooth and strongly convex distributed optimization in networks.\\\" International Conference on Machine Learning. PMLR, 2017.\\n\\n[2] Kovalev, Dmitry, et al. \\\"Lower bounds and optimal algorithms for smooth and strongly convex decentralized optimization over time-varying networks.\\\" Advances in Neural Information Processing Systems 34 (2021): 22325-22335.\\n\\n[3] URL: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html\\n\\n[4] URL: https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/linear_model/_sag.py#L87 \\n\\n[5] Li, Huan, and Zhouchen Lin. \\\"Accelerated gradient tracking over time-varying graphs for decentralized optimization.\\\" Journal of Machine Learning Research 25.274 (2024): 1-52.\"}", "{\"title\": \"Answer to Reviewer CnvV continues\", \"comment\": \"**Questions**\\n\\n* The organization clarity (please see weakness-1). In order to better elaborate the contributions in this work, it is suggested that the authors re-write all bullet points in the contribution paragraph. For the ADOM+VR algorithm, please state the type of optimization problem/network setting and why it is optimal (is it applied to a different optimization problem or the convergence speed is faster than others); For the GT-PAGE algorithm, please also state the type of optimization problem and why it is optimal. It is not strong enough to only mention the elements of different algorithms in the contribution.\\n\\nSee line 053 describing GT-PAGE: \\u201c*For nonconvex decentralized optimization over time-varying graphs, we propose an optimal algorithm GT-PAGE (Algorithm 2)*\\u201d. 
This line contains the type of optimization problem: *nonconvex decentralized optimization* and the network setting: *time-varying graphs*. The meaning of optimality is the similarity of complexity bounds of the method and lower complexity bounds for the problem class (see our answer to Weakness 1). We will add the discussion on the definition of optimality in the revised version of the paper. Please also see the common answer to Reviewers.\\n\\nFor ADOM+VR, we will update the text as follows: \\u201c*We propose a method for **strongly convex** decentralized finite-sum optimization over time-varying graphs ADOM+VR (Algorithm 1)*\\u201d.\\n\\n* The achievability of Assumption 2.5 (please see weakness-2).\\n\\nSee answer to weakness 2.\\n\\n\\n* In experiments, are those plots presenting training set performance or testing set performance? It is expected both training and testing performance are shown. Moreover, it would be good to show classification accuracy on test set.\\n\\nThe plots show only the training set error. Since our theory only covers the convergence speed of the algorithms and does not touch generalization properties, we decided to illustrate our findings by running the methods on the train set and plotting the optimality measure (distance to optimum or gradient norm) and not cover quality metrics such as accuracy.\\n\\n\\n**References**\\n\\n[1] Nedic, Angelia, Alex Olshevsky, and Wei Shi. \\\"Achieving geometric convergence for distributed optimization over time-varying graphs.\\\" SIAM Journal on Optimization 27.4 (2017): 2597-2633.\\n\\n[2] Kovalev, Dmitry, et al. \\\"Lower bounds and optimal algorithms for smooth and strongly convex decentralized optimization over time-varying networks.\\\" Advances in Neural Information Processing Systems 34 (2021): 22325-22335.\\n\\n[3] Nesterov, Yurii. Lectures on convex optimization. Vol. 137. 
Berlin: Springer, 2018.\"}", "{\"summary\": \"This paper studies decentralized finite-sum optimization over time-varying networks for smooth objectives.\\n\\nIn the strongly-convex setting, the ADOM+VR algorithm consists in mixing the ADOM+ (accelerated decentralized optimization for time-varying graphs) and Katyusha (Accelerated variance-reduced single-machine) algorithms. Similarly, the lower bound mixes the ones from ADOM+ and ADFS. \\n\\nIn the non-convex setting, gradient tracking (a standard way to obtain \\\"exact\\\" decentralized algorithms) is combined with the PAGE VR algorithm. Similarly, the lower bound mixes that of Yuan et al (2022) and ADOM+. \\n\\nIn both cases, combining existing optimal approaches yields (almost) matching upper and lower complexity bounds. Toy experiments are given to illustrate the practical performances.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"the overall approach is clear and natural\", \"(almost) matching upper and lower bounds\", \"results for both strongly convex and non-convex settings\"], \"weaknesses\": [\"Contributions are incremental: mostly combining accelerated variance-reduced estimators with accelerated decentralized algorithms over time-varying graphs (or gradient tracking for the non-convex setting)\", \"The setting (finite-sum decentralized optimization over time-varying graphs) is rather niche\", \"The algorithms are quite rigid, with long communication stages (through $W(k)$) and long computation stages (through mini-batching), which is effective but not very elegant, and unlikely to extend well to, e.g., asynchronous settings. 
Approaches like DADAO (Nabli and Oyallon, 2023) are likely to lead to better solutions, as well as maybe close the gap between lower and upper bounds in the strongly-convex setting.\", \"The non-convex algorithm has a pretty bad dependence on the graph constants ($\\\\chi^3$), and is only saved by multi-consensus steps (which again, would likely break in asynchronous settings for instance).\"], \"questions\": [\"It is said in Corollary 3.3 that the number of communications per iteration is of order $\\\\chi$, but in Algorithm 1 there is only one communication per iteration (matrix $W(k)$ is used). I understand that we should use Algorithm 1 with the multi-consensus matrix $W(k, \\\\kappa)$ instead, is that true? This should be clarified.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"While the authors see some possible merit in this work, after the rebuttal and discussion most of the reviewers still view the paper as being below the acceptance threshold (two weakly and one strongly). There most common recurring concern was the limited technical novelty, and there were also some suggestions on other issues such as paper organization and providing more metrics in experiments. There could be some routes to strengthening the paper, e.g., handling other settings such as relaxing synchronicity, but the consensus is not to accept in the current form.\", \"additional_comments_on_reviewer_discussion\": \"The discussion was on the low side, but the reviewers confirmed that they read the rebuttal and still maintain their recommendation.\"}", "{\"title\": \"Answer to Reviewer JkLV\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your thorough review and comments. We are grateful that you pointed out the strengths of our paper.\\n\\n**Qualitative discussions**.\\n\\nThank you for your suggestion. 
In the revised version of the paper, we will add a discussion to stress the paper contributions.\\n\\n**Smoothness parameters in Tables 1 and 2**.\\n\\nWe will add clarifications and links to Assumptions in Section 2 in the annotations to Tables 1 and 2.\\n\\n**Motivating examples**.\\n\\nTypically in optimization literature one writes out the problem formulation right away. The examples are usually given in the numerical experiments section, like in our paper.\\n\\n**Error bars in numerical experiments**.\\n\\nThank you for pointing this out. The error bars could be added to the plots, but this is not very typical in optimization papers.\"}" ] }
C53FwQZigu
IPSeg: Image Posterior Mitigates Semantic Drift in Class-Incremental Segmentation
[ "Xiao Yu", "Yan Fang", "Yunchao Wei", "Yao Zhao" ]
Class incremental learning aims to enable models to learn from sequential, non-stationary data streams across different tasks without catastrophic forgetting. In class incremental semantic segmentation (CISS), the semantic content of the background class changes across incremental phases, which is known as \textbf{semantic drift}. Our research identifies two severe issues within semantic drift: separate optimization and noisy semantics, which significantly degrade CISS performance. Based on this insight, we propose a simple yet effective method, \textbf{I}mage \textbf{P}osterior and Semantics Decoupling for \textbf{Seg}mentation (IPSeg), designed to address these challenges through two specific mechanisms. First, IPSeg leverages image posterior probabilities as guidance to resolve the separate optimization issue. Second, IPSeg utilizes semantics decoupling to effectively handle noisy semantics and tailor the learning strategies for different types of knowledge. Experiment results on the Pascal VOC 2012 and ADE20K datasets demonstrate superior performance compared to previous state-of-the-art approaches, particularly in more realistic and challenging long-term scenarios. Furthermore, IPSeg exhibits excellent properties in terms of both learning plasticity and memory stability.
[ "Incremental Learning", "Semantic Segmentation" ]
Reject
https://openreview.net/pdf?id=C53FwQZigu
https://openreview.net/forum?id=C53FwQZigu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zvobOrInJU", "ykKOF1bOSK", "tLmWCfbAYU", "t691iREL1q", "qxYNx3OoE1", "paqRqywCHf", "oLcQ4uk8jT", "n4xFWUtMtV", "jNcNjYh3Jv", "izXabHaeHA", "YqQjir3rTl", "X0YIgNQhpp", "VP60GBJeMm", "VMjqYdOmnS", "TzFffkoWqr", "Sdh8YHtowE", "LEeHxH0xcr", "KOXoAh6imR", "HSczHlUZQa", "HLtwHIIk7M", "GLkgZ046ES", "CtD1rIM6LC", "8iySJvfxkj", "8Pxen4ijdm", "5oJosOMUV4", "0LpmFWMzJW" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732800003495, 1732355084786, 1732292657862, 1733312953757, 1730585629313, 1733129838084, 1732768394518, 1737523412694, 1732293334738, 1733221312269, 1732292745812, 1732355706953, 1730721631692, 1733115822091, 1732768744460, 1732293481553, 1732860524913, 1732293249970, 1730659751434, 1734749643150, 1732293061592, 1733312995863, 1730615957570, 1732293730690, 1732355871527, 1733129945135 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission731/Reviewer_zxHa" ], [ "ICLR.cc/2025/Conference/Submission731/Reviewer_zxHa" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Reviewer_FU4w" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Reviewer_wQeR" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Reviewer_zxHa" ], [ 
"ICLR.cc/2025/Conference/Submission731/Reviewer_zxHa" ], [ "ICLR.cc/2025/Conference/Submission731/Reviewer_k5js" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Reviewer_wQeR" ], [ "ICLR.cc/2025/Conference/Submission731/Area_Chair_BNMR" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Reviewer_k5js" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ], [ "ICLR.cc/2025/Conference/Submission731/Reviewer_wQeR" ], [ "ICLR.cc/2025/Conference/Submission731/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the authors' response.\\nThe authors have clearly demonstrated that image-level classification suffers less forgetting than semantic segmentation, which verifies the motivation of IPSeg. Additionally, the authors show that IPSeg can perform better than PLOP + NeST using the same training epochs. Hope these results can appear in the revision and I decide to raise my rate from 6 to 8.\"}", "{\"comment\": \"Thanks for the authors' response.\\nRegarding the first point, \\\"Compared to fine-grained task heads, it is easier for the subnetwork to learn image-level knowledge from memory buffer,\\\" I wonder how the authors reached this conclusion. Would you please report the classification accuracy of two tasks (image-level classification and pixel-level classification)? I understand that the pixel-level classification may have lower accuracy if you count pixel-level accuracy. So, please calculate the image-level accuracy for both tasks. For the segmentation head, if a pixel belongs to class $C$, then the image belongs to class $C$. 
I hope you demonstrate that the image-level classification can suffer less forgetting than the segmentation, which is my core concern. Otherwise, I can not understand why the image-level classification can alleviate forgetting in segmentation.\"}", "{\"title\": \"Response to Reviewer zxHa (1/2)\", \"comment\": \"Thank you for your positive feedback and constructive comments. In regard to the weaknesses and questions raised, we provide our point-to-point responses below:\\n\\n---\\n**Q1:** Why the classification subnetwork does not suffer from catastrophic forgetting? Is there any evaluation of the performance of the classification subnetwork?\\n\\n**A1:** \\n**Evaluation on classification subnetwork**: We provide an evaluation of the performance of the classification subnetwork after all steps as below. We hold three experiment settings: \\\"**Seq**\\\" refers to training the subnetwork sequentially without any additional tricks, indicating the worst case suffering from catastrophic forgetting, \\\"**Ours**\\\" refers to the training setting used in our paper, and \\\"**Joint**\\\" refers to training with all task data jointly as the upper bound.\\n| | Precision| Recall | F1 |\\n| :--------: |:--------: | :--------: |:---: |\\n| Seq | 55.62% | 24.67% | 23.78% |\\n| Ours | 78.28% | 87.03% | 80.68% |\\n| Joint | 89.96% | 90.00% | 89.89% |\\n\\n**Analysis**: It is obvious that the classification subnetwork suffers little forgetting. We achieve this performance mainly owing to two specific settings: \\n1. **Mixed training data**: As mentioned in L204-207 (L206-209 in the revised version), the classification subnetwork uses data from both the memory buffer and the current task dataset for training. Compared to fine-grained task heads, it is easier for the subnetwork to learn image-level knowledge from memory buffer.\\n2. 
**Image-level pseudo-label for supervision**: As claimed in L210-213 (L212-214 in the revised version), IPSeg introduces image-level pseudo-labels on past classes as supervision to mitigate the catastrophic forgetting challenge. The ablation result in Table 4 partially reflects the effectiveness of this design.\\n\\n---\\n**Q2:** What is the pre-trained knowledge of the backbone from, the first training step or the ImageNet?\\n\\n**A2:** Both. We use the ImageNet pretrained Resnet-101 and Swin-B to initialize our backbone. Then, we finetune the backbone at the first step and freeze it in the subsequent steps following previous works [1-2].\\n\\n---\\n**Q3:** The differences between the mechanism of image posterior probability and transformer-based mask prediction, like Incrementer. \\n\\n**A3:** Thanks for your reminder. We list the commonalities and differences between Incrementer and IPSeg as follows:\", \"commons\": \"1. Mask generation: The processes of mask generation in Incrementer and IPSeg are similar. They both generate mask predictions by fusing dense visual information with sparse class information.\\n2. Class information: The class tokens in Incrementer and image posterior probabilities in IPSeg both contain class information.\", \"differences\": \"1. Implementation details: Incrementer computes cosine similarity between visual embeddings and class embeddings to generate mask predictions, while IPSeg multiplies dense pixel-level class predictions with image-level class predictions to generate the final outputs.\\n2. Mechanism: In Incrementer, class embeddings are responsible for class information while visual embeddings are class agnostic. But in IPSeg, both the pixel-level and image-level predictions are class-related, and the latter plays a role in introducing image-level class guidance to rectify the former.\"}", "{\"title\": \"Further Response to wQeR (1/2)\", \"comment\": \"Dear reviewer,\\n\\nThanks for your timely response and your interest in our work. 
To your concerns and questions, here we explain and answer them one by one.\\n\\n---\\n\\n**C1:** Could the author explain the underlying reasons? As mentioned by Reviewer zxHa, the image-level classification (IP) suffers less forgetting than the segmentation is regarded as the key contribution of this paper. If this is the case, I believe IPSeg w/o M should definitely outperform LGKD+PLOP on ADE20K without any memory. This is my main concern, and I hope the authors can provide convincing response.\\n\\n\\n**A1:**\\nIt is not entirely fair to compare our work directly with LGKD without considering their respective focuses and properties. These two methods address different challenges in CISS. Specifically, our method is optimized by leveraging a memory buffer to address the separate optimization challenge. The key features of our work are outlined below:\\n\\nFirstly, we emphasize the data-replay version of IPSeg as the main experimental result in our paper (Table 1 and Table 2) because **the Image Posterior (IP) Branch is originally designed to operate with a memory buffer**:\\n- The IP branch is supervised using $\\\\mathcal{Y}^{t}\\\\_i \\\\cup Y\\\\{(\\\\phi_{1:t-1}(h_\\\\theta(x^{m,t}\\\\_i)))\\\\}$ , where the latter term $Y\\\\{(\\\\phi_{1:t-1}(h_\\\\theta(x^{m,t}_i)))\\\\}$ is derived from pixel predictions generated by segmentation heads. The accuracy of these segmentation heads relies significantly on the memory buffer. This design is ablated in Table 4.\\n- Additionally, the samples stored in the memory buffer provide accurate image-level supervision for past classes, enabling the IP branch to suffer less forgetting.\\n\\nSecondly, **removing the memory buffer leads to degradation.** Without the memory buffer, the IP branch experiences degradation in image classification, which, combined with the degradation of the segmentation branch, leads to an overall performance decline in IPSeg w/o M. 
\\n\\n**In summary, IPSeg is fundamentally designed with a memory buffer, which makes a critical contribution to effectively mitigating forgetting. And the data-free version of IPSeg (IPSeg w/o M) is not the core contribution of our work but rather a supplementary component.** Instead, we present it as an alternative option for scenarios requiring privacy protection.\\n\\n---\\n\\n**C2:** The computational cost is considerable compared to SSUL (137.1G vs. 94.9G), indicating a 44.5% increase. I am not convinced by the cost-efficiency.\\n\\n**A2:**\\nOur work indeed introduces a higher computational cost compared to our baseline method. **But the cost-efficiency also needs to be considered in comparison with the SOTA method, where IPSeg offers significant advantages in training cost while maintaining comparable inference speed (FPS) and compute cost (FLOPs)**. This can be found in our cost-efficiency results in the official comment. \\n\\n**Compared with the baseline, our additional branch increases the model's FLOPs, but it has minimal impact on inference speed (FPS).** The increase in FLOPs mainly stems from IPSeg\\u2019s use of image-level predictions to guide final outputs. Specifically, IPSeg broadcasts image-level predictions to match the shape of pixel-level logits and combines them through element-wise multiplication. Although this introduces dense computational operations, these are inherently parallelizable and can be extensively optimized and accelerated by GPUs, ensuring that the inference speed remains largely unaffected. **Besides, relying solely on a single metric to evaluate cost is neither comprehensive nor objective.** Inference speed (FPS) is a crucial factor that must also be considered, as it serves as the most direct measure of real-time capability. 
\\n\\n**Moreover, it can not be ignored that IPSeg achieves substantial performance improvements with only a modest increase in inference costs compared to the baseline.** According to the reported results, IPSeg operates at an FPS of 27.3, which is approximately 6 FPS lower than SSUL-M's 33.7, and requires 6.2G of GPU memory compared to SSUL-M\\u2019s 5.3G. However, IPSeg delivers a remarkable overall mIoU of **81.5**, significantly surpassing SSUL-M\\u2019s 71.9, representing a **9.6 improvement in performance**.\"}", "{\"summary\": \"To address the challenging Class incremental semantic segmentation (CISS), this paper proposes Image Posterior to attack the noisy semantics and Semantic Decoupling to cope with separate optimization, which both of these two issues refer to semantic drift and background shift problems in CISS. The proposed Image posterior supports with guidance to calibrate the error prediction and amplify the scale of correct prediction, hence solving the semantic drift issue when the model faces ambiguous categories between the past task and the current task. Semantic Decoupling implemented by Memory Bank manner is used to decouple the pure background, unknown foreground, past class pixels, and target class pixels. The authors employ a saliency estimator and filtering trick to improve the memory buffer while the memory buffer is an image-level corpus.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper writing is kind of good.\\n\\n2. Supporting image posterior within the image-level to guide the CISS task sounds reasonable to resolve the semantic drift and works like error calibration.\\n\\n3. The final CISS experimental results seem competitive and the ablative studies are extensive.\", \"weaknesses\": \"1. Decoupling the background into pure background and unknown foregrounds (utilizing the saliency model), I believe, has been widely studied. 
The decoupling implementation sounds complicated to understand in the main paper, but I think the authors could provide a more clear presentation of this part. I think the authors should make detailed comparisons with existing works, like Ssul.\\n\\n2. Not sure how the authors construct the memory buffer with image-level samples. This plays an important role in this paper to obtain good guidance.\\n\\n3. The figure presentations are inconsistent, Figure shows good quality while figures 2/4/5/6/7/8 display a blur effect. These should be presented by pdf, etc.\", \"questions\": \"Please refer to my weaknesses part. Moreover, the authors shall further make some comparisons in terms of training efficacy with the previous method, since they use a memory buff and constitute a two-branches architecture.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer wQeR\", \"comment\": \"Dear Reviewer wQeR.\\n\\nAs the author-reviewer discussion period is nearing its end, and since other reviewers have actively engaged in discussions, we would greatly appreciate it if you could review our responses to your comments at your earliest convenience.\\n\\nThis will allow us to address any further questions or concerns you may have before the discussion period ends. If our responses satisfactorily address your concerns, please let us know. Thank you very much for your time and effort!\\n\\nSincerely,\\n\\nThe Authors of Submission #731\"}", "{\"title\": \"Response to Reviewer zxHa\", \"comment\": \"**Q1:** please calculate the image-level accuracy for both tasks.\\n\\n**A1:**\\nBased on your concerns, we hold two experiments to answer your questions. \\n\\n1. 
First, we evaluate the image-level accuracy of the base 15 classes using the image posterior (IP) branch and the segmentation (Pixel) branch at each step on Pascal VOC 15-1 to further investigate their **forgetting** on base classes. \\\"IP\\\" refers to only using the IP branch, \\\"Pixel\\\" refers to only using the segmentation branch, where the class $\\\\mathcal{C}$ exists if a pixel is predicted as $\\\\mathcal{C}$. \\\"Pixel+IP\\\" denotes using them both as our paper does.\\n\\n| Base Classes ACC (%) |step 0| step 1| step 2 | step 3 |step 4|step 5|\\n| -------- | -------- | -------- |--- |-|-|-|\\n| IP | 87.44|86.41|86.99|86.82|86.86|86.29|\\n|Pixel|88.17|86.42|86.30|85.43|84.84|84.70|\\n|Pixel+IP|93.07|92.24|92.41|91.93|91.95|91.02|\", \"the_ablation_shows_that\": \"**the image-level classification (IP) suffers less forgetting than the segmentation (Pixel)**, and our method (Pixel+IP) shows a similar property against forgetting with the help of IP.\\n\\n2. Additionally, we also evaluate the image-level accuracy on all seen classes at each step to analyze their performance on both keeping old knowledge and learning new knowledge.\\n\\n| Seen Classes ACC (%) |step 0| step 1| step 2 | step 3 |step 4|step 5 (Final)|\\n| -------- | -------- | -------- |--- |-|-|:-:|\\n| IP | 87.44|82.54|81.14|81.32|82.09|**82.34**|\\n|Pixel|88.17|83.56|82.29|78.23|77.60|**76.57**|\\n|Pixel+IP|93.07|90.05|90.13|87.30|87.68|**88.03**|\\n\\nFor the segmentation branch, its image-level accuracy on all seen classes gradually degrades after learning new classes, performing worse than its accuracy on base classes. This indicates the segmentation branch performs poorly on new classes, which is consistent with our description about separate optimization (L160-161 and L180-182 in the revised manuscript). 
In contrast, the IP branch experiences less deterioration from separate optimization and helps our method maintain a good balance between retaining old knowledge and learning new knowledge. Therefore, the ablation evidently shows that **the IP branch learns image-level knowledge better than the segmentation branch**.\\n\\nIn summary, these experiments demonstrate that, on one hand, **the image classification (IP) branch exhibits higher accuracy** after all steps and **suffers less forgetting**. On the other hand, **the IP branch mainly helps our method mitigate the separate optimization, effectively improving overall performance**.\\n\\nPlease let us know whether our response solves your questions and concerns.\\n\\n---\\n\\n**Q2:** For Q6, the authors' response is not so convincing. I suggest that the authors retrain the proposed IPSeg in the 50-50 setting using any preferred training schedule to show that the performance can be further improved.\\n\\n**A2:** Thanks for your suggestion. We run an experiment on the ADE20K 50-50 setting with a longer schedule, the same as PLOP+NeST uses, and report the results below. Using the same training schedule, our method shows a slight advantage over NeST.\\n\\n|ADE20K 50_50|0-50|51-150|all|\\n|-|-|:-:|-|\\n|PLOP+NeST(75 epochs)|48.7|27.7|34.8|\\n|IPSeg(60 epochs)|47.3|26.7|33.6|\\n|IPSeg(75 epochs)|47.7|28.7|35.1|\\n\\n---\\n\\n**Q3:** Regarding Q5, Grounding SAM can not be used, because it has data leakage when using all class names as prompts. This is only my reminder, not my concern.\\n\\n**A3:** The SAM model segments all regions of a given image indiscriminately, making it difficult to distinguish between foreground and background areas. Consequently, this poses a challenge for using SAM as salient maps within our methods. To address this issue, we employ the Grounded SAM model instead, but this introduces the information leakage problem you mentioned. We will further explore related techniques. 
Thanks for your kind reminder.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer k5js\", \"comment\": \"Thank you for your positive comments and recommendation to our work. We look forward to any further discussions and will make all essential improvements based on your and other reviewers' feedbacks.\\n\\nBest wishes!\"}", "{\"comment\": \"I would like to thank the authors for their efforts in providing additional experiments on ADE20K. However, the results bring me some concerns about the key contribution \\\"the image-level classification (IP) suffers less forgetting than the segmentation\\\".\\n\\n## Concerns:\\n\\n- **IPSeg w/o M** performs **inferior** compared to **LGKD+PLOP** on the three standard settings **ADE20K 100-10**, **100-50** and **50-50**, when no memory is used. Could the author explain the underlying reasons? As mentioned by **Reviewer zxHa**, **the image-level classification (IP) suffers less forgetting than the segmentation** is regarded as the **key contribution** of this paper. If this is the case, I believe **IPSeg w/o M** should definitely outperform **LGKD+PLOP** [5] on ADE20K **without any memory**. This is my main concern, and I hope the authors can provide convincing response.\\n\\n\\n| Model | Memory | Backbone | Architecture | |100-10 || |100-50| | |50-50 || \\n|------------------|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n| | | | | *0-100* | *101-150* | *all* | *0-100* | *101-150* | *all* | *0-100* | *101-150* | *all* | *0-50* | *51-150* | *all* |\\n| LGKD+PLOP [5] | \\u00d7 | Resnet-101 | Deeplab V3 | **42.1** | 22.0 | **35.4** | **43.6** | **25.7** | **37.5** | **49.4** | **29.4** | **36.0** |\\n| IPSeg w/o M | \\u00d7 | Resnet-101 | Deeplab V3 | 41.0 | **23.6** | 35.3 | 41.3 | 24.0 | 35.5 | 46.7 | 26.2 | 33.1 |\\n\\n---\\n- The computational cost is considerable compared to SSUL (**137.1G vs. 94.9G**), indicating a **44.5\\\\% increase**. 
I am not convinced by the cost-efficiency.\\n\\n---\\n## Questions:\\n\\n- From my perspective, ECLIPSE does not suffer from error propagation (L157). Could the reviewer comment on this?\\n- The subscript $\\\\phi_{1: t-1}\\\\left(h_\\\\theta\\\\left(x_i^{m, t}\\\\right)\\\\right)$ used for $\\\\tilde{\\\\mathcal{Y}}$ is inconsistent to the subscript $i$ used for $\\\\mathcal{Y}^t$. I suppose i to be the i-th input image. However, $\\\\phi_{1: t-1}\\\\left(h_\\\\theta\\\\left(x_i^{m, t}\\\\right)\\\\right)$ is related to the class index predicted by previous heads.\\n\\n---\\n[5] Yang, Ze, et al. \\\"Label-guided knowledge distillation for continual semantic segmentation on 2d images and 3d point clouds.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\", \"title\": \"Concerns with the Key Contribution\"}", "{\"title\": \"Response to Reviewer zxHa (2/2)\", \"comment\": \"**Q4:** The training and inference costs, such as training time and memory.\\n\\n**A4:** Please refer to our official comment, we provide a detailed and comprehensive analysis on training and inference cost, thanks.\\n\\n---\\n\\n**Q5:** The effect of the salient maps supervision and the quality of the salient maps on the ADE20K dataset.\\n\\n**A5:** It indeed poses a challenge when using saliency maps on ADE20K to identify target regions, specifically for objects of small scale and at the edges of images. According to your concerns, we conduct the ablation study on the ADE20K 100-10 task with the following settings: \\\"**w/o Sal**\\\" uses no saliency map supervision, \\\"**w/ Sal**\\\" uses saliency map supervision as we do in paper, and \\\"**w/ SAM**\\\" uses saliency maps extracted by SAM [3] model. We use the official Grounded SAM [4] code with all class names in ADE20K as text prompts to extract corresponding masks in implementation. 
Additionally, we also report performance differences on \\\"thing\\\" and \\\"stuff\\\" classes defined in ADE20K panoptic segmentation[5] to investigate the bias of saliency maps on different semantic regions.\\n\\n| Method | 0-100 | 101-150 | All | Things |Stuff |\\n| :--------: | :-: | :-: | :-: |:-: |:-: |\\n| w/o Sal | 42.1 | 28.0 | 37.4 |37.2 |37.5 |\\n| w/ Sal | 43.0 | 30.9 | 39.0 |39.1 |37.6 |\\n| w/ SAM | 43.6 | 31.8 | 39.7 |39.7 |38.7 |\\n\\nBasically, we find the following conclusions:\\n1. Using the default saliency map supervision to implement knowledge decoupling strategy, IPSeg gets a performance improvement by $+1.6$% mIoU. And using saliency maps extracted by SAM further improves IPSeg performance by $+0.7$% mIoU.\\n2. The default setting performs well in identifying \\\"Things\\\" classes but struggles with \\\"Stuff\\\" classes, resulting in a performance gain of $+1.9$% on \\\"Things\\\" classes but merely $+0.1$% on \\\"Stuff\\\" classes. Furthermore, the SAM-based saliency maps provide better supervision for both \\\"Things\\\" and \\\"Stuff\\\" classes, with improvements of $+0.6$% on \\\"Things\\\" and $+1.1$% on \\\"Stuff\\\" compared to \\\"w/ Sal\\\".\\n\\nIn summary, the analysis result confirms that the quality of saliency maps affects model performance on the ADE20K dataset. We will report this conclusion in our supp to inspire future works.\\n\\nIn our paper, IPSeg uses the same saliency map extraction method without more information leakage to ensure a fair comparison with baseline methods. Despite this, IPSeg still achieves SOTA performance compared to SOTA methods like MicroSeg and CoinSeg, which utilize region proposals from more advanced models like Mask2Former as auxiliary information. Thanks for your interesting suggestion.\\n\\n\\n\\n---\\n**Q6:** Why is the replay-based IPSeg (33.6) worse than data-free-based PLOP+NeST (34.8)? 
Is there any insight or explanation?\\n\\n**A6:** We find that PLOP+NeST outperforms IPSeg in the ADE20K 50-50 setting mainly because of two reasons:\\n\\n1. **Unfair Training Epochs**: Compared with IPSeg, NeST [6] requires additional **15** warm-up epochs to initialize new classifiers for each step. These extra training epochs allow NeST to better adapt to new classes, leading to improved performance.\\n2. **Characteristics of the 50-50 Task**: NeST aligns new classifiers with the backbone and adapts to the new class data with extra **15** warm-up epochs, which relies on sufficient training data. In the \\\"50-50\\\" or \\\"100-50\\\" settings, there are a large number of new classes and abundant training data to help NeST warm up better. However, in long-term challenging tasks such as \\\"100-5\\\" and \\\"100-10\\\", there is not enough new task data to achieve a similar warm-up effect, and its performance is not as ideal as in the \\\"50-50\\\" setting.\\n\\nGenerally speaking, the performance advantage on the ADE20K 50-50 setting is highly related to the inconsistent training schedule used in NeST.\\n\\n---\\n\\n[1] SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning, NeurIPS 2021.\\n\\n[2] Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation, NeurIPS 2022.\\n\\n[3] Segment Anything, ICCV 2023.\\n\\n[4] Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks, arXiv 2024.\\n\\n[5] Semantic Understanding of Scenes Through the ADE20K Dataset, IJCV 2019.\\n\\n[6] Early Preparation Pays Off: New Classifier Pre-tuning for Class Incremental Semantic Segmentation, ECCV 2024.\"}", "{\"comment\": \"Regarding Q5, Grounding SAM can not be used, because it has data leakage when using all class names as prompts. This is only my reminder, not my concern.\\nFor Q6, the authors' response is not so convincing. 
I suggest that the authors retrain the proposed IPSeg in the 50-50 setting using any preferred training schedule to show that the performance can be further improved.\"}", "{\"summary\": \"This paper indicates two key issues within semantic drift, separate optimization, and noisy semantics, for class-incremental segmentation. The authors propose a method called IPSeg, including image posterior probabilities and semantics decoupling. This paper conducts extensive experiments on replay-based and non-replay-based scenarios and shows significant improvement under extremely long incremental learning steps.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper identifies two issues in the semantic drift for class-incremental segmentation and proposes two components for each problem, respectively. Extensive experiments demonstrate the effectiveness of the proposed method.\", \"weaknesses\": \"The writing and clarification could be simplified in Section 3.3 and Section 3.4, especially for Equation 5. In Figure 2, the targets of the permanent and temporal branches are not clear.\", \"questions\": [\"This paper uses image posterior probability to rectify the class prediction. However, the subnetwork for image-level classification is trainable during different steps. I wonder why the classification subnetwork does not suffer from catastrophic forgetting. Is there any evaluation of the performance of the classification subnetwork during multiple steps?\", \"In Figure 2, the feature extractor is frozen. What is the pre-trained knowledge of the backbone from, the first training step or the ImageNet?\", \"The mechanism of image posterior probability is similar to the way used in transformer-based mask prediction, like Incrementer [1]. 
Is there any comparison and discussion with it?\", \"Because this work modifies the inference procedure of the mask, please report the training and inference costs, such as training time and memory.\", \"This method uses salient maps derived from an off-the-shelf model as one of the targets. The salient maps are suitable and useful for the VOC dataset because most objects can be detected by the salient model. However, for the more challenging dataset, ADE20K, many semantic regions cannot be identified by the salient model, such as grassland in the supplementary material Figure 9. Please study the effect of the salient map supervision and the quality of the salient maps on the ADE20K dataset. A suggestion: you may replace the salient maps with the results of the SAM model [2].\", \"In Table 2, why is the replay-based IPSeg (33.6) worse than the data-free-based PLOP+NeST (34.8)? Is there any insight or explanation?\", \"[1] Incrementer: Transformer for class-incremental semantic segmentation with knowledge distillation focusing on old class, CVPR 2023\", \"[2] Segment Anything, ICCV 2023\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"After checking all the reviews and rebuttals, I maintained the original rating. I strongly recommend that the authors improve the final version based on the reviews.\"}", "{\"title\": \"Response to Reviewer wQeR\", \"comment\": \"**Q:** Please do not list the results without denoting whether memory is used or not in the same table in A5. It is not fair to do so. Additionally, I would like to request Data-Free IPSeg w/o M results on ADE20K.\\n\\n**A:** Thanks for your tips. Here, we provide the results denoting whether memory is used or not below. 
Besides, we reorganize Table 2 and add the results of \\\"**IPSeg w/o M**\\\" for comprehensive and fair comparison.\\n\\n\\n\\n| ADE20K | memory|Backbone | Architecture | 100-5 |||100-10 |||100-50 |||50-50 |||\\n| -------- | :-:|------- | -------- |-|-|-|-|-|-|-|-|-|-|-|-|\\n|||||0-100|101-150|all|0-100|101-150|all|0-100|101-150|all|0-50|51-150|all|\\n| ECLIPSE |\\u00d7 | Resnet-101 | Mask2former |43.3|16.3|34.2|43.4|17.4|34.6|45.0|21.7|37.1|-|-|-|\\n|LGKD+PLOP|\\u00d7|Resnet-101|Deeplab V3|-|-|-|42.1|22.0|**35.4**|43.6|25.7|**37.5**|49.4|29.4|**36.0**|\\n|SSUL|\\u00d7|Resnet-101|Deeplab V3|39.9|17.4|32.5|40.2|18.8|33.1|41.3|18.0|33.6|48.4|20.2|29.6|\\n|MicroSeg|\\u00d7|Resnet-101|Deeplab V3|40.4|20.5|33.8|41.5|21.6|34.9|40.2|18.8|33.1|48.6|24.8|32.9|\\n|PLOP+NeST|\\u00d7|Resnet-101|Deeplab V3| 39.3|17.4|32.0|40.9|22.0|34.7|42.4|24.3|36.3|48.7|27.7|34.8|\\n|IPSeg w/o M |\\u00d7|Resnet-101|Deeplab V3|41.0|22.4|**34.8**|41.0|23.6|35.3|41.3|24.0|35.5|46.7|26.2|33.1|\\n|\\n|IPSeg|\\u221a|Resnet-101|Deeplab V3|42.4|22.7|35.9|42.1|22.3|35.6|41.7|25.2|36.3|47.3|26.7|33.6|\\n\\n\\n|ADE20K |memory| Backbone | Architecture | 100-5 |||100-10 |||100-50 |||50-50 |||\\n| -------- | :-:| -------- | -------- |-|-|-|-|-|-|-|-|-|-|-|-|\\n|||||0-100|101-150|all|0-100|101-150|all|0-100|101-150|all|0-50|51-150|all|\\n|SSUL|\\u00d7|Swin-B|Deeplab V3|41.3|16.0|32.9|40.7|19.0|33.5|41.9|20.1|34.6|49.5|21.3|30.7|\\n|CoinSeg|\\u00d7|Swin-B|Deeplab V3|43.1|24.1|36.8|42.1|24.5|36.2|41.6|26.7|36.6|49.0|28.9|35.6|\\n|PLOP+NeST|\\u00d7|Swin-B|Deeplab V3|39.7|18.3|32.6|41.7|24.2|35.9|43.5|26.5|37.9|50.6|28.9|36.2|\\n|IPSeg w/o M|\\u00d7|Swin-B|Deeplab V3|43.1|26.2|**37.6**|42.5|27.8|**37.6**|43.2|29.0|**38.4**|49.3|33.0|**38.5**|\\n|\\n|IPSeg|\\u221a|Swin-B|Deeplab V3|43.2|30.4|38.9|43.0|30.9|39.0|43.8|31.5|39.7|51.1|34.8|40.3|\\n\\nFrom the results, we can draw two key conclusions:\\n\\n1. 
**Comparable Performance Without Memory Buffer**: Even without a memory buffer, our data-free version demonstrates performance close to that of our data-replay version, exhibiting only a minor performance loss (up to 1.1% when using ResNet-101 on ADE20K). Our method maintains robust and consistent performance across all settings, whether or not a memory buffer is used.\\n\\n2. **Competitive Performance in Long-Term Incremental Scenarios**: The data-free version of IPSeg (IPSeg w/o M) achieves competitive performance with both ResNet-101 and Swin-B backbones, especially in challenging long-term incremental scenarios (e.g., ADE 100-5, ADE 100-10).\"}", "{\"title\": \"Response to Reviewer FU4w\", \"comment\": \"Thank you for your positive feedback and insightful comments! In regard to the weaknesses and questions raised, we provide our point-to-point responses below:\\n\\n---\\n\\n**Q1:** Question about the decoupling implementation and detailed comparisons with existing works.\\n\\n**A1:** \\nPrevious works, such as SSUL, use pseudo-labels and saliency maps to decouple background regions and mitigate the background shift challenge. However, they do not fully address the challenge of semantic shift due to incomplete pseudo labeling and decoupling. \\n\\n1. Based on the observation in L233 (L234 in the revised version), an image can be divided into four distinct parts: the region of past classes $\\mathcal{C}_{1:t-1}$, target classes $\\mathcal{C}\\_{t}$, unknown foreground $c'_u$ and pure background $c'_b$. This strategy is adopted by both previous works [1-3] and ours.\\n2. SSUL identifies unknown classes $c'_u$ from the background and employs extra parameters for prediction, which are optimized together with the other target class heads. This inevitably introduces noise and affects the learning of target classes, as discussed in L148-154 in our paper.\\n3. 
**Essentially, we fundamentally analyze the characteristics of noisy semantics and apply a decoupled, divide-and-conquer strategy to learn them**. Specifically, we introduce two separate branches to decouple the learning of these classes:\\n- **Temporary branch**: This branch learns the target classes $\\mathcal{C}_t$ and non-target foregrounds $c_f$, which change dynamically across each incremental step. \\n- **Permanent branch**: This branch learns the pure background $c'_b$ and unseen foreground $c'_u$, which persist throughout the incremental learning process.\\n\\nWe also provide a visualization of semantic decoupling in Figure 7.\\n\\n---\\n\\n**Q2:** Details of memory buffer with image-level samples.\\n\\n**A2:** We use a shared memory buffer for both the image posterior branch and the segmentation branch. Below, we provide a detailed explanation of how this memory buffer is constructed and updated, and how it stores samples:\\n\\n1. **Memory Construction and Update**: Given the memory size $\\mathcal{M}$ and the number of already seen classes $|\\mathcal{C}\\_{1:t}|$, the memory buffer is constructed before step 1 with $\\mathcal{M} // |\\mathcal{C}_{1:t}|$ samples per class. Once initialized, it is updated before the start of each new step, as done in previous works [1-3]. The update of our memory buffer follows class-balanced sampling, which is explained in detail in Section 3.5.\\n2. **Data Storage**: For raw data, IPSeg directly stores the image paths in a JSON file, as done in previous works [1-3]. For image-level labels, IPSeg stores the class labels of the images as arrays in the same JSON file with multi-hot encoding, where $1$ indicates the presence of a class and $0$ indicates absence. The memory cost for this is negligible. 
For pixel-level labels, instead of storing full-class annotations (with data type *uint8* ) as prior approaches do, IPSeg only stores the salient mask, where the background and foreground are labeled as $0$ and $1$, respectively (with data type *bool* ). Theoretically, the storage space could be reduced to $1/8$. \\n\\n\\n\\n---\\n\\n**Q3:** The figure presentations are inconsistent. These should be presented by pdf, etc.\\n\\n**A3:** Thanks for your valuable feedback and careful check regarding the quality of our figure presentations. We replace all blurry figures you pointed out with newer high-resolution PDF versions in the revised manuscript. \\n\\n---\\n\\n**Q4:** Moreover, the authors shall further make some comparisons in terms of training efficacy with the previous method, since they use a memory buffer and constitute a two-branch architecture.\\n\\n**A4:** Please refer to our official comment, where we provide a detailed and comprehensive analysis of training and inference cost, thank you.\\n\\n---\\n\\n[1] SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning, NeurIPS 2021.\\n\\n[2] Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation, NeurIPS 2022.\\n\\n[3] CoinSeg: Contrast Inter-and Intra-Class Representations for Incremental Segmentation, ICCV 2023.\"}", "{\"title\": \"We greatly appreciate your constructive suggestions for improving our work\", \"comment\": \"Thank you for your positive feedback on our work and for increasing the score. We greatly appreciate your constructive suggestions for improving our work.\\n\\nBest wishes.\"}", "{\"title\": \"Response to Reviewer wQeR (2/2)\", \"comment\": \"**Q1:** I don't see the rationale why it is called the permanent branch. 
Generally, only consistent class definitions across all steps can be regarded as permanent semantics.\\n\\n**A1:** In our work, the terms \\\"permanent\\\" and \\\"temporary\\\" do not refer to specific classes but depend on their learning cycle throughout the incremental stages. For the concepts existing across all incremental steps, we use the term \\\"permanent\\\" to describe them and the \\\"permanent branch\\\" to indicate the branch that consistently learns them across all incremental steps.\\n\\nThere are three components to be identified once a new incremental step begins: the target class set $\\mathcal{C}\\_{t}$, the pure background $c_b'$, and the unknown set $c_u'$. Across all incremental steps, $\\mathcal{C}_{t}$ changes drastically, $c_b'$ remains fixed, and $c_u'$ shrinks but never disappears. Compared with the ever-changing target class set $\\mathcal{C}_t$, the pure background $c_b'$ and the unknown set $c_u'$ exist across all incremental steps.\\n\\nFurther, based on your example, what happens in our IPSeg is as follows: \\\"cat\\\" is included in the unknown set in step 1 to be learned by the permanent branch. In step 2, \\\"cat\\\" is removed from the unknown set and moved into the target class set. After step 2, \\\"cat\\\" is only regarded as a seen class. This transformation process happens for all target classes across all incremental steps. \\n\\n\\n---\\n\\n**Q2:** How are the final prediction maps generated from the permanent and temporary branches?\\n\\n**A2:** As shown by the green lines in Figure 2, the permanent branch $\\phi_p$ outputs the prediction for the background $c_b'$, and the temporary branch $\\phi_i$ (i=1,2,...,t) outputs predictions for the target classes $\\mathcal{C}\\_t$. 
The pixel-level prediction $\\phi_{0:T}(h_\\theta(x_i))$ can be written as:\\n$$ \\phi_{0:T}(h_\\theta(x_i))= Con( \\phi_{p,bg}(h_\\theta(x_i)) , \\phi_{1:T,target}(h_\\theta(x_i)) ),$$ where $Con$ is the concatenation operation, and $\\phi_{p,bg}(h_\\theta(x_i))$ and $\\phi_{1:T,target}(h_\\theta(x_i))$ represent the background prediction from the permanent branch and the target class prediction from the temporary branches, respectively. This pixel-level prediction is then multiplied by the image posterior probability to form the final prediction maps, as in Eq. 3 and Eq. 4.\\n\\nIn the revised manuscript, we provide a detailed description of the prediction map generation process.\\n\\n---\\n\\n**Q3:** The previous-class image labels are not available at the current step. How to supervise the image posterior in each step?\\n\\n**A3:** As mentioned in L204-207 (L206-210 in the revised version), the image posterior branch uses mixed data from the memory buffer and the current training dataset of each step. The supervision is derived from the knowledge of all seen classes in the mixed data, which mainly comes from:\\n1. Samples from the memory buffer are saved with their image-level labels of the corresponding stages, contributing knowledge of previous classes.\\n2. IPSeg uses image-level pseudo-labels of the current data to capture the knowledge of old classes. \\n3. The ground truth of the current task data on target classes is available. \\n\\nTo investigate how these forms of supervision improve performance, we provide a detailed ablation in Table 4.\\n\\n\\n---\\n\\n**Q4:** Do the temporary branches' predictions contribute to the final prediction, or are just the target class scores used to multiply by the corresponding confidence score from the image posterior branch?\\n\\n**A4:** \\nYes, there are two roles the temporary branch plays in contributing to the final prediction. 
The first is the target class predictions $\\phi_{1:T,target}(h_\\theta(x_i))$, which we described in our answer to Q2 above. The second is the prediction of other foreground regions $c_f$: IPSeg utilizes $c_f$ from the temporary branches of each step to filter out erroneous outputs during inference, as shown in L265\u2013271 (L277-285 in the revised version). Meanwhile, the pure background prediction within each temporary branch does not contribute to the final predictions; it only helps the model distinguish the target classes during training. \\n\\n---\\n\\n[1] ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning, CVPR 2024.\\n\\n[2] SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning, NeurIPS 2021.\\n\\n[3] Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation, NeurIPS 2022.\\n\\n[4] CoinSeg: Contrast Inter-and Intra-Class Representations for Incremental Segmentation, ICCV 2023.\\n\\n[5] Label-guided knowledge distillation for continual semantic segmentation on 2d images and 3d point cloud, ICCV 2023.\"}", "{\"summary\": \"The paper proposes IPSeg, an innovative framework for addressing semantic drift in Class-Incremental Semantic Segmentation (CISS). IPSeg leverages two main strategies: (1) Image Posterior (IP) guidance to mitigate errors from independent task optimization, and (2) Permanent-Temporary Semantics Decoupling to handle noisy semantics. These mechanisms allow IPSeg to retain past knowledge while learning new classes, achieving significant improvements in segmentation performance on Pascal VOC and ADE20K benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This approach enhances pixel-level accuracy by leveraging global image-level predictions. Specifically, by introducing the image posterior guidance mechanism, IPSeg aims to mitigate the separate optimization issue in CISS.\\n2. 
The decoupling of stable and temporary semantic components is well-conceived and allows the model to handle both static background and dynamic foreground objects across incremental steps.\\n3. IPSeg outperforms various state-of-the-art methods across different incremental segmentation scenarios.\", \"weaknesses\": \"1. The issue of separate optimization has been investigated by [2]. In that paper, the scale inconsistency issue \\\"earlier incremental task heads may have larger output scales than the later heads, especially in similar classes\\\" was alleviated by logit manipulation. Therefore, the claim that \\\"separate optimization does not attract any attention\\\" is inaccurate.\\n2. The introduction of additional components like the image posterior branch and decoupled semantics increases model complexity. This may hinder its applicability for real-time or resource-constrained applications. Efficiency comparisons, such as computational complexity, iterations required for convergence, and finetuning time cost, should be conducted.\\n3. A salient object detector is used to identify the foreground regions, which will incur additional computational costs. Moreover, it brings additional information and is unfair to other competing methods that do not use it.\\n4. The memory buffer may raise scalability and privacy concerns, especially when scaling to larger datasets and storing sensitive user data.\\n5. This paper misses several SOTA methods [1-2] in the experimental comparison.\\n\\n[1] Yang, Ze, et al. \\\"Label-guided knowledge distillation for continual semantic segmentation on 2d images and 3d point clouds.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[2] Kim, Beomyoung, Joonsang Yu, and Sung Ju Hwang. \\\"ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"questions\": \"1. 
The permanent branch aims to learn dummy labels representing \u201cpure\u201d background and unknown objects. However, the definition of unknown objects is ever-changing. For instance, \\\"cat\\\" can be an unknown class in step 1, then a target class in step 2, and afterwards become a past seen class in step 3. I don't see the rationale why it is called the permanent branch. Generally, only consistent class definitions across all steps can be regarded as permanent semantics.\\n2. How are the final prediction maps generated from the permanent and temporary branches? The detailed procedure should be elaborated.\\n3. The previous-class image labels are not available at the current step. How to supervise the image posterior in each step?\\n4. It seems like temporary branches predict pure background and other foreground regions in addition to the target classes at the current step. Does this contribute to the final prediction, or are just the target class scores used to multiply by the corresponding confidence score from the image posterior branch?\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\", \"details_of_ethics_concerns\": \"This paper uses a memory buffer to store previously seen training data, which may cause privacy issues in privacy-sensitive scenarios, for instance, medical data, personal identity data, etc.\"}", "{\"metareview\": \"The paper addresses key issues in the task of class-incremental semantic segmentation, focusing on two main issues, i.e., semantic drift and separate optimization, that hinder performance. The proposed IPSeg method introduces two primary innovations: (1) Image Posterior Guidance, which leverages image-level classification to rectify segmentation errors and mitigate separate optimization, and (2) Permanent-Temporary Semantics Decoupling, aimed at distinguishing stable background and dynamic instance semantics. 
Extensive experiments on Pascal VOC and ADE20K datasets demonstrate that IPSeg achieves notable performance gains over SOTA methods, especially in long-term incremental learning scenarios.\", \"strengths\": \"The paper clearly identifies and addresses the issues of semantic drift and separate optimization, which are critical for improving CISS. IPSeg\u2019s use of image posterior probabilities to guide segmentation is reasonable and is supported by extensive ablation studies and performance benchmarks. The experimental results consistently show that IPSeg outperforms baseline and competing methods across different datasets and settings. The authors' thorough responses to reviewer questions demonstrate the method\u2019s robustness, particularly in scenarios with memory buffers. While the abstract may be somewhat confusing, the overall writing is generally clear in showing the main ideas and implementation.\", \"weaknesses\": \"IPSeg introduces several modules to address the observed issues; however, their entanglement makes it difficult to isolate the contributions of each component, limiting the clarity of the paper\u2019s novelty and potential to inspire future work. The overall approach, though effective, is not entirely novel. Experimental results indicate that IPSeg performs well with a memory buffer; however, its performance without memory (IPSeg w/o M) is less competitive, lagging behind methods like LGKD+PLOP in data-free settings (noted by Reviewer wQeR). This discrepancy raises doubts about the core hypothesis that image-level classification mitigates forgetting better than segmentation. Additionally, the computational cost of IPSeg is higher than that of some baseline methods. 
Reviewers also highlighted the complexity of certain sections (e.g., semantic decoupling), suggesting that these areas could benefit from simplification and clearer comparisons with existing methods tackling similar challenges.\", \"decision\": \"The submission received mixed ratings, with two positive and two negative reviews. Reviewer k5js provided positive feedback but their comments were general, so I slightly down-weighted their evaluation in the final decision-making. Although the paper presents strong experimental results across multiple datasets, significant concerns remain regarding the novelty of IPSeg\u2019s design and the empirical validation of its core components. As a result, the paper in its current form may not be accepted this time.\", \"additional_comments_on_reviewer_discussion\": [\"During the rebuttal period, the reviewers raised concerns primarily from the following three perspectives:\", \"(1) the performance of IPSeg without a memory buffer,\", \"(2) the computational cost compared to baseline methods, and\", \"(3) clarity in the explanation of the permanent-temporary semantics decoupling mechanism.\", \"Reviewer zxHa questioned the effectiveness of image-level classification without memory buffers, suggesting that IPSeg's core hypothesis was not fully validated in such scenarios. In response, the authors provided ablation studies and new experimental results demonstrating that while IPSeg w/o M performs slightly below LGKD+PLOP, the primary contribution lies in the memory-buffer version, which achieves SOTA results. Reviewer wQeR expressed concerns about the computational cost, which the authors addressed by clarifying the trade-off between performance and inference time, where the increased complexity yields significant accuracy improvements. Additionally, Reviewer FU4w requested clearer explanations of the decoupling strategy, which the authors addressed by revising relevant sections and providing updated visualizations. 
These responses were considered in the final decision, with the performance gap in memory-free scenarios remaining a key factor in the recommendation for rejection, despite the overall strength and innovation of the paper.\"]}", "{\"title\": \"Response to Reviewer wQeR (1/2)\", \"comment\": \"Thank you for your positive feedback and insightful comments. In regard to the weaknesses and questions raised, we provide our point-to-point responses below.\\n\\n---\\n\\n**W1:** The claim that \\\"separate optimization does not attract any attention\\\" is inaccurate.\\n\\n**A1:** We sincerely thank the reviewer for pointing out this oversight. We notice that ECLIPSE [1] also focuses on this challenge and proposes the logit manipulation method. We have also revised the corresponding statement. Besides, though focusing on the same challenge, IPSeg differs from ECLIPSE in its solution:\\n1. **ECLIPSE** is a prompt-based method built on the Mask2Former network, which encounters an error propagation problem after **freezing the old prompts**. To address this, ECLIPSE incorporates logit manipulation to leverage common knowledge across the classes. \\n2. **IPSeg** is an architecture-based approach built on the DeepLabV3 network. We provide a detailed analysis of the issues arising from **freezing the old classification heads**. IPSeg introduces an image posterior branch to explicitly introduce informative image-level class knowledge into the segmentation network and directly overcome the separate optimization challenge.\\n\\n---\\n**W2:** Efficiency comparisons, such as computational complexity, iterations required for convergence, and finetuning time cost.\\n\\n**A2:** Please refer to our official comment, where we provide a detailed and comprehensive analysis of training and inference cost, thanks.\\n\\n---\\n**W3:** The salient object detector will incur additional computational costs and brings additional information, which is unfair to other competing methods that do not use it.\\n\\n**A3:** \\n1. 
**Computational costs**: To use saliency maps as auxiliary information, we use an off-the-shelf salient object detector to pre-compute the saliency maps for the entire training set. This process is performed only once, takes less than 10 minutes for Pascal VOC and 30 minutes for ADE20K, and has no impact on inference time. \\n2. **Fair comparison**: We note that SOTA methods [2-4] all introduce additional information to enhance the model's recognition capability. Following these methods, IPSeg utilizes the same saliency maps to ensure a fair comparison. \\n\\n---\\n**W4:** The memory buffer may raise scalability and privacy concerns, especially when scaling to larger datasets and storing sensitive user data.\\n\\n**A4:** We appreciate the reviewer's concern regarding memory buffers, scalability, and privacy. \\n\\nAs for the size of the memory buffer and scalability, IPSeg sets $\\mathcal{M}=100$ for VOC and $\\mathcal{M}=300$ for ADE20K, strictly following the same settings as SSUL, MicroSeg and CoinSeg. \\n\\nIn scenarios with privacy constraints or limited data scalability, we also provide the **data-free version of IPSeg** (denoted as \\\"IPSeg w/o M\\\" in Table 1) as an alternative with minor performance loss. For privacy concerns, we discuss the limitations of using the memory buffer and its potential social impact in our \\\"Conclusions\\\" section. We agree that privacy issues must be handled with caution in artificial intelligence applications.\\n\\n---\\n**W5:** This paper missed several SOTA methods in experimental comparison.\\n\\n**A5:** We provide an experimental comparison on Pascal VOC 2012 and ADE20K below, where IPSeg still achieves competitive performance, particularly in long-term incremental tasks (ADE 100-5 and ADE 100-10). 
\\n\\n| ADE | Backbone | Architecture | 100-5 |||100-10 |||100-50 |||50-50 |||\\n| - | - | - |-|-|-|-|-|-|-|-|-|-|-|-|\\n| ECLIPSE | Resnet-101 | Mask2former |43.3|16.3|34.2|43.4|17.4|34.6|45.0|21.7|37.1|-|-|-|\\n|LGKD+PLOP|Resnet-101|Deeplab V3|-|-|-|42.1|22.0|35.4|43.6|25.7|37.5|49.4|29.4|36.0|\\n|IPSeg|Resnet-101|Deeplab V3|42.4|22.7|35.9|42.1|22.3|35.6|41.7|25.2|36.3|47.3|32.7|33.6|\\n|IPSeg|Swin-B|Deeplab V3|43.2|30.4|38.9|43.0|30.9|39.0|43.8|31.5|39.7|51.1|34.8|40.3|\\n\\n|VOC|Backbone|Architecture|15-5|||15-1|||10-1|||2-2|||\\n|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|\\n| ECLIPSE | Resnet-101 | Mask2former|-|-|-|-|-|-|-|-|-|-|-|-|\\n|LGKD+PLOP|Resnet-101|Deeplab V3|75.2|54.8|71.1|69.3|30.9|61.1|-|-|-|-|-|-|\\n|IPSeg|Resnet-101|Deeplab V3|79.5|71.0|77.5|79.6|58.9|74.7|75.9|66.4|71.4|62.4|61.0|61.2|\\n|IPSeg|Swin-B|Deeplab V3|83.3|73.3|80.9|83.5|75.1|81.5|80.3|76.7|78.6|73.1|72.3|72.4|\\n\\nThese two methods do not conduct comprehensive incremental experiments as IPSeg does. Though they show performance advantages in their respective domains, differences in the task scenarios they focus on may limit their direct comparability with IPSeg in the context of our study. In this regard, a direct comparison with ours would be unfair to them. We have also added them to the main results in Table 1 and Table 2 for comprehensive comparisons, as you suggested.\"}", "{\"title\": \"Further Response to wQeR (2/2)\", \"comment\": \"**Q1:** From my perspective, ECLIPSE does not suffer from error propagation (L157). Could the reviewer comment on this?\\n\\n**A1:**\\nThanks, we also agree that ECLIPSE addresses the error propagation problem by its proposed logit manipulation, as we wrote in our earlier response:\\n> ECLIPSE is a prompt-based method built on the Mask2Former network, which encounters an error propagation problem after freezing the old prompts. 
To address this, ECLIPSE incorporates logit manipulation to leverage common knowledge across the classes.\\n\\n\\nIn L157, we claimed that **ECLIPSE points out and researches the error propagation problem** rather than suffering from error propagation. Based on your feedback, we will revise our description as below:\\n> Previous work points out the challenge that freezing parameters from the old stage can preserve the model's prior knowledge but will lead to error propagation and confusion between similar classes, and proposes logit manipulation to solve this challenge.\\n\\n\\nWe hope this newly refined version can clearly express our attitude and views without causing any confusion or ambiguity.\\n\\n---\\n\\n**Q2:** The subscript $\\phi_{1:t-1}(h_\\theta(x_i^{m,t}))$ used for $\\tilde{\\mathcal{Y}}$ is inconsistent with the subscript $i$ used for $\\mathcal{Y}^t$. I suppose $i$ to be the i-th input image. However, $\\phi_{1:t-1}(h_\\theta(x_i^{m,t}))$ is related to the class index predicted by previous heads.\\n\\n**A2:** \\nThanks for your timely reminder of the inconsistent subscript.\\nThis notation is used in our Equation 2 with its explanation in L211-213:\\n$$\\mathcal{L}\\_{\\text{ IP}} = \\mathcal{L}\\_{\\text{ BCE}}(\\hat{\\mathcal{Y}}^{t}\\_i, \\tilde{\\mathcal{Y}}^{t}\\_i) = \\mathcal{L}\\_{\\text{ BCE}}(\\psi(h_\\theta(x^{m,t}\\_i)), \\tilde{\\mathcal{Y}}^{t}\\_i), ~~\\tilde{\\mathcal{Y}}^{t}\\_i = \\mathcal{Y}^{t}\\_i \\cup \\tilde{\\mathcal{Y}}\\_{\\phi_{1:t-1}(h_\\theta(x^{m,t}\\_i))}$$\\n> ... and pseudo label $\\tilde{\\mathcal{Y}}\\_{\\phi_{1:t-1}(h_\\theta(x^{m,t}\\_i))}$ on past seen classes $\\mathcal{C}\\_{1:t-1}$. \\n\\nWe use $\\tilde{\\mathcal{Y}}\\_{\\phi_{1:t-1}(h_\\theta(x^{m,t}_i))}$ to represent the image-level pseudo labels on past classes that are derived from previous segmentation heads. 
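To make this notation concrete, here is a minimal NumPy sketch (illustrative only, not our actual implementation; the `min_pixels` threshold and all tensor shapes are assumptions) of deriving multi-hot image-level pseudo-labels on past classes from the previous heads' pixel-wise predictions, and concatenating them with the current ground-truth labels to form the BCE target:

```python
import numpy as np

def image_level_pseudo_labels(old_pixel_logits, min_pixels=10):
    # old_pixel_logits: (C_old, H, W) scores from the frozen previous heads.
    # Hard-assign each pixel to its highest-scoring past class, then mark a
    # class as present if it claims at least `min_pixels` pixels.
    pred = old_pixel_logits.argmax(axis=0)                       # (H, W)
    counts = np.bincount(pred.ravel(), minlength=old_pixel_logits.shape[0])
    return (counts >= min_pixels).astype(np.float32)             # multi-hot (C_old,)

def ip_target(current_multi_hot, old_pixel_logits, min_pixels=10):
    # Union of pseudo-labels on past classes with ground truth on current
    # classes; the two class sets are disjoint, so the union is a concatenation.
    pseudo_old = image_level_pseudo_labels(old_pixel_logits, min_pixels)
    return np.concatenate([pseudo_old, current_multi_hot])

# Toy example: 3 past classes over an 8x8 image, 2 current classes.
logits = np.zeros((3, 8, 8))
logits[2, :2, :] = 5.0          # class 2 wins on 16 pixels; ties go to class 0
target = ip_target(np.array([0.0, 1.0], dtype=np.float32), logits)
# target == [1., 0., 1., 0., 1.]
```

The resulting `target` would then supervise the image posterior branch output through the BCE loss in Equation 2.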
\\n\\nWe agree that it is not consistent with the already-used subscript $i$ for the image index, as you mentioned. Considering this inconsistency, we decide to revise this notation into $Y(\\{\\phi_{1:t-1}(h_\\theta(x^{m,t}_i))\\})$, using the set operator $\\{\\cdot\\}$ on the class channel instead. Correspondingly, we also revise Equation 2 and its explanation into the following form: \\n\\n$$\\mathcal{L}\\_{\\text{ IP}} = \\mathcal{L}\\_{\\text{ BCE}}(\\hat{\\mathcal{Y}}^{t}\\_i, \\tilde{\\mathcal{Y}}^{t}\\_i) = \\mathcal{L}\\_{\\text{ BCE}}(\\psi(h_\\theta(x^{m,t}\\_i)), \\tilde{\\mathcal{Y}}^{t}\\_i), ~~\\tilde{\\mathcal{Y}}^{t}\\_i = \\mathcal{Y}^{t}\\_i \\cup Y(\\{\\phi_{1:t-1}(h_\\theta(x^{m,t}\\_i))\\})$$\\n> ... and pseudo label $Y(\\{\\phi_{1:t-1}(h_\\theta(x^{m,t}_i))\\})$ on past seen classes $\\mathcal{C}\\_{1:t-1}$, where $Y(\\cdot)$ is the set operator on the class channel.\\n\\nThanks again for your careful and rigorous check; this indeed further helps us improve the quality of our manuscript.\"}", "{\"summary\": \"This paper addresses the challenges of class incremental semantic segmentation (CISS), where models learn from sequential tasks with changing background semantics, a phenomenon known as semantic drift. The authors identify two key issues\u2014separate optimization and noisy semantics\u2014that significantly hinder CISS performance. To tackle these issues, they propose Image Posterior and Semantics Decoupling for Segmentation (IPSeg), which employs two main mechanisms: (1) Image Posterior Guidance: IPSeg uses image-wise posterior probabilities to guide pixel-wise predictions, mitigating separate optimization issues. (2) Semantics Decoupling: Noisy semantics are split into two groups: stable, static semantics and dynamic, temporary semantics, each handled by separate branches with distinct life cycles. 
Experiments on the Pascal VOC 2012 and ADE20K datasets show that IPSeg outperforms existing approaches, particularly in challenging long-term scenarios, and demonstrates improved balance between learning plasticity and memory stability.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"## 1. Strong Motivation\\n\\nThe paper presents a well-defined and convincing motivation for addressing Separate Optimization, a challenge where earlier-trained task heads can produce disproportionately higher scores compared to later-trained heads for visually similar categories. This issue is thoroughly analyzed, with compelling visual support in Figures 1, 5, and 6.\\n\\n## 2. Comprehensive Analysis\\n\\nThe authors provide a detailed ablation of each proposed component, supported by robust quantitative and qualitative analyses. This thorough exploration effectively demonstrates the contribution of each component to the overall method\\u2019s performance.\\n\\n\\n## 3. State-of-the-Art Performance\\n\\nThe proposed method, IPSeg, demonstrates state-of-the-art performance on the VOC2012 and ADE20K datasets, with results that convincingly surpass existing approaches.\\n\\n## 4. Clear and Structured Presentation\\n\\nThe paper is well-written and carefully structured. The problem setting and motivation are clearly articulated, and each proposed component is thoroughly explained and ablated, enhancing the paper\\u2019s clarity and rigor.\", \"weaknesses\": \"Most of the concerns I raised are addressed within the paper\\u2019s appendix, which provides comprehensive additional supportive analysis.\\n\\n## Recommendation\\nI conclude that this paper is solid and compelling, meeting the high standards expected for ICLR. My initial recommendation is to Accept. 
I will finalize the rating after a discussion with the authors and other reviewers.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"To all Reviewers\", \"comment\": \"We thank all reviewers for their great efforts and constructive comments, which help us further improve our work. This official comment contains two parts: 1. The analysis on training and inference cost. 2. The summary of our manuscript modification.\\n\\n---\\n\\n## 1. Analysis on training and inference cost \\nHere, we provide a comprehensive analysis of the model parameters, training, and inference costs. We test and report the results of IPSeg, SSUL-M and CoinSeg-M with Swin-B on the VOC 15-1 setting. We set *image_size=512x512, epochs=50, and batch_size=16* in training and *image_size=512x512* for inference. All results are run on an RTX 3090 GPU.\\n\\n1. **Model Parameters**: Using the thop tool, we analyze and compare the trainable parameters for these methods. The increase in parameter sizes is similar across them, with an average of $3.84M$ additional parameters per step. Additionally, IPSeg has $29.72M$ more parameters than SSUL-M due to the additional image posterior branch.\\n\\n| Step | 0 | 1 | 2 | 3 | 4 | 5 |\\n| :-------- | :--------: | :--------: | :--: | :--: | :--: | :--: |\\n| IPSeg | 135.92 M | 139.76 M | 143.60 M | 147.66 M | 151.28 M | 155.12 M |\\n| SSUL-M | 106.20 M | 110.03 M | 113.89 M | 117.95 M | 121.56 M | 125.40 M |\\n| CoinSeg-M | 107.02 M | 111.15 M | 115.29 M | 119.42 M | 123.55 M | 127.68 M |\\n\\n\\n2. **Training cost**: The training time and GPU Memory usage of these methods are shown in the table below. 
Due to the introduced image posterior branch, IPSeg incurs a higher training cost than SSUL-M but a lower one than CoinSeg-M.\\n\\n| Method | Time | GPU usage |\\n| :-------- | :--------: | :--------: |\\n| IPSeg | 9h 14min | 21.1G |\\n| SSUL-M | 7h 13min | 19.4G |\\n| CoinSeg-M | >15h | 21.3G |\\n\\n3. **Inference**: The inference speed (FPS), FLOPs, and cost are shown in the table below. The inference speed of IPSeg ($27.3$ FPS) is slightly lower than SSUL-M ($33.7$ FPS) and similar to CoinSeg-M ($28.2$ FPS). Due to the proposed image posterior branch, the model's floating-point operations ($137.1$ GFLOPs) are higher than the baseline ($94.9$ GFLOPs), with an approximately $1$ GB increase in GPU usage. \\n\\n| Method | FPS | FLOPs | GPU usage |\\n| :-------- | :--------: | :--: | :--: |\\n| IPSeg | 27.3 | 137.1G | 6.2G |\\n| SSUL-M | 33.7 | 94.9G | 5.3G |\\n| CoinSeg-M | 28.2 | 96.3G | 5.6G |\\n\\nOverall, IPSeg introduces an additional image posterior branch with slight increases in model parameters, training and inference costs, but brings a great performance improvement. It is a worthwhile trade-off between performance and cost.\\n\\n---\\n\\n## 2. Manuscript Modification\\nTo provide clearer insight into the revisions we made to our paper and the experiments conducted in response to the reviewers' feedback, we summarize the changes during the rebuttal period as follows:\\n\\n### Additional analyses:\\n* We conduct a quantitative evaluation of the image posterior branch on different settings. The results show that the image posterior branch has excellent resilience against catastrophic forgetting. (reviewer zxHa Q1)\\n* We provide a comprehensive analysis of the model parameters, training, and inference costs of IPSeg compared with previous works. The results show that IPSeg introduces slight increases in model parameters, training and inference costs, but brings a great performance improvement. 
(reviewer zxHa Q4 and reviewer wQeR W2 and reviewer FU4w Q4)\\n* We conduct an ablation study on different saliency maps. The results show that the default saliency map struggles with identifying \\\"Stuff\\\" classes, and a high-quality saliency map can obtain better performance. (reviewer zxHa Q5)\\n\\n### Clarification:\\n* More details on the process to get the final prediction maps are provided in L269-L275. (reviewer wQeR Q2)\\n* More details on the construction of the memory buffer are provided in the appendix. (reviewer FU4w Q2)\\n* Additional SOTA method comparisons are added in Table 1 and Table 2. (reviewer wQeR W5)\\n* Table 2 is reorganized and the results of \\\"IPSeg w/o M\\\" are added for a comprehensive and fair comparison. (reviewer wQeR W5, latest revision)\\n\\n### Correction: \\n* A typo is fixed in Equation 3. ($\\\\phi_{1:T}$ --> $\\\\phi_{0:T}$)\\n* The statement in L156 is revised by adding previous work. (reviewer wQeR W1)\\n* All figures are replaced with PDF versions for better presentation. (reviewer FU4w Q3)\"}", "{\"title\": \"Request for Data-Free IPSeg w/o M Results on ADE20k\", \"comment\": \"Please do not list the results **without denoting whether memory is used or not** in the same table in A5. It is not fair to do so. Additionally, I would like to request **Data-Free IPSeg w/o M** results on ADE20k.\"}", "{\"title\": \"Response to Reviewer FU4w\", \"comment\": \"Dear Reviewer FU4w,\\n\\nAs the author-reviewer discussion period is nearing its end, and since other reviewers have actively engaged in discussions, we would greatly appreciate it if you could review our responses to your comments at your earliest convenience.\\n\\nThis will allow us to address any further questions or concerns you may have before the discussion period ends. If our responses satisfactorily address your concerns, please let us know. Thank you very much for your time and effort!\\n\\nSincerely,\\n\\nThe Authors of Submission #731\"}
C4q5R6XbJ6
Drawing the Line: Enhancing Trustworthiness of MLLMs Through the Power of Refusal
[ "Yuhao Wang", "Zhiyuan Zhu", "Heyang Liu", "Yusheng Liao", "Hongcheng Liu", "Yanfeng Wang", "Yu Wang" ]
Multimodal large language models (MLLMs) excel at multimodal perception and understanding, yet their tendency to generate hallucinated or inaccurate responses undermines their trustworthiness. Existing methods have largely overlooked the importance of refusal responses as a means of enhancing MLLMs reliability. To bridge this gap, we present the Information Boundary-aware Learning Framework (InBoL), a novel approach that empowers MLLMs to refuse to answer user queries when encountering insufficient information. To the best of our knowledge, InBoL is the first framework that systematically defines the conditions under which refusal is appropriate for MLLMs using the concept of information boundaries proposed in our paper. This framework introduces a comprehensive data generation pipeline and tailored training strategies to improve the model’s ability to deliver appropriate refusal responses. To evaluate the trustworthiness of MLLMs, we further propose a user-centric alignment goal along with corresponding metrics. Experimental results demonstrate a significant improvement in refusal accuracy without noticeably compromising the model’s helpfulness, establishing InBoL as a pivotal advancement in building more trustworthy MLLMs.
[ "Trustworthiness", "Alignment", "MLLMs" ]
https://openreview.net/pdf?id=C4q5R6XbJ6
https://openreview.net/forum?id=C4q5R6XbJ6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zkv7w6aghh", "wHhmb4ujdr", "t9f2xGew4C", "sYniD0CfCG", "r4h9A2W23W", "oxc5nY969i", "i0XZbNv9tG", "fA1t2slfc0", "dRXJ2LKarV", "YSVTEvh1dI", "XCTrEBjWRG", "GQcN2pcysL", "GIDMZlqqnU", "G4Zcc7Q8IX", "BjTwAVarPs", "9opcnzk7NQ", "69BLnQdk3a", "4m1gcL3KsK", "26PzIMzzYw", "1QJqIXj22K" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1733190855983, 1732535730909, 1732535493375, 1732170612091, 1732801407702, 1733190891244, 1733569472240, 1732287287011, 1732535713565, 1732196383913, 1732171788014, 1732287630510, 1732765096077, 1730597767181, 1729790070756, 1732199470395, 1732197961032, 1730640555339, 1732801392647, 1732195859907 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Reviewer_9ssd" ], [ "ICLR.cc/2025/Conference/Submission9986/Reviewer_f1e7" ], [ "ICLR.cc/2025/Conference/Submission9986/Reviewer_9ssd" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9986/Reviewer_sbyo" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ], [ "ICLR.cc/2025/Conference/Submission9986/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your insightful comments and valuable suggestions on improving our work. As the final deadline approaches, we would kindly like to ask whether our responses have sufficiently addressed your questions and concerns. We are happy to engage in further discussion or provide any clarifications needed, and we welcome any additional feedback to strengthen our work before the rebuttal period ends. Please kindly let us know if you have additional questions or concerns that stand between us and a higher score! Thank you for your time and thoughtful reviews.\\n\\nBest regards, \\n\\nThe Authors\"}", "{\"title\": \"Welcome to Discuss\", \"comment\": \"Thank you for your insightful feedback and the time spent reviewing our paper. We recognize the importance of comprehensively addressing your concerns and are committed to resolving the issues. Should there be any aspects of our response that you find unclear or if further discussion is needed, please feel free to let us know. We are fully prepared to offer additional clarifications or engage in more detailed discussions to resolve the concerns.\\n\\nWe truly value your meticulous attention and thoughtful insights throughout this process. Your expertise and guidance are indispensable to us, and we eagerly await your further feedback.\"}", "{\"title\": \"Welcome to Discuss\", \"comment\": \"Thank you for your insightful feedback and the time spent reviewing our paper. We recognize the importance of comprehensively addressing your concerns and are committed to resolving the issues. Should there be any aspects of our response that you find unclear or if further discussion is needed, please feel free to let us know. 
We are fully prepared to offer additional clarifications or engage in more detailed discussions to resolve the concerns.\\n\\nWe truly value your meticulous attention and thoughtful insights throughout this process. Your expertise and guidance are indispensable to us, and we eagerly await your further feedback.\"}", "{\"title\": \"Author Response to Reviewer 9ssd(Part 1)\", \"comment\": \"Thank you for your comprehensive review and insightful comments. We responded in detail as follows:\\n> Q1: Could you elaborate on how you balance increasing the refusal rate with maintaining accuracy? It would be helpful to understand more about the trade-off mechanism between these two goals. Additionally, do you have any strategies in place to address potential user frustration if the model refuses too often?\", \"a1\": \"Thank you for your thoughtful question. Balancing the increase in the refusal rate with maintaining accuracy is a crucial aspect of our framework. To address this trade-off, we primarily focus on carefully adjusting the proportions of training data within and beyond the defined boundaries. Detailed experiments can be found in Appendix G.2 of the revised paper.\\n \\nWe also appreciate your point regarding potential user frustration if the model refuses too frequently. While this concern is valid, we believe the benefits of reducing misinformation outweigh the risks of occasional user dissatisfaction. Moreover, we agree that refusal responses accompanied by a clear explanation could significantly improve user experience. By explaining why the model cannot respond\\u2014such as limitations in visual input or knowledge\\u2014users are more likely to perceive the refusal as a thoughtful decision rather than a limitation of the model.\\n \\nHowever, implementing this feature is outside the scope of our current work due to the significant increase in complexity it would introduce to both data generation and evaluation processes. 
Ensuring that these explanations are accurate, relevant, and user-friendly would require substantial additional work. Nevertheless, we see this as a promising direction for future research and plan to explore explanatory feedback mechanisms to make refusal responses more transparent and user-centered. We have added a discussion on this limitation in the Appendix B.\\n\\n> Q2: The approach of generating unanswerable questions by randomly swapping images and questions may not effectively simulate real-world unanswerable scenarios. It raise a concern about whether this method accurately reflects realistic situations where a model cannot provide a correct answer.\", \"a2\": \"The approach of randomly swapping images and questions is designed to simulate scenarios where the question is unrelated to the image. Such situations are common in real-world contexts, for instance, when a user accidentally uploads the wrong image or asks an unrelated question. Sometimes, users may also intentionally create such mismatches to test the model\\u2019s robustness. Furthermore, similar cases are observed in widely used crowd-sourced datasets like VQAV2, where mismatches between images and questions occur. Therefore, we believe that this method effectively captures certain real-world scenarios.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your insightful feedback, which has significantly contributed to improving our paper. This is a gentle reminder since we have only a few days until the discussion period ends. If you feel our response and revisions have addressed your concerns, we would be grateful for your continued strong support. Please let us know if you have any additional suggestions for improvement.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your insightful comments and valuable suggestions on improving our work. 
As the final deadline approaches, we would kindly like to ask whether our responses have sufficiently addressed your questions and concerns. We are happy to engage in further discussion or provide any clarifications needed, and we welcome any additional feedback to strengthen our work before the rebuttal period ends.\\ufeff Please kindly let us know if you have additional questions or concerns that stand between us and a higher score! Thank you for your time and thoughtful reviews. \\ufeff\\n\\nBest regards, \\n\\nThe Authors\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Author Response to Reviewer f1e7(Part 1)\", \"comment\": \"Thanks for your time and insightful reviews. We responded in detail as follows:\\n\\n> W1: While IDK-IT effectively reduces misinformation, it can limit model's helpfulness (contrary to what the authors claim). One can increase truthfulness by giving refusal responses along with some justification for why the model refused to answer. Authors have not addressed this aspect of refusal with reasoning in the paper.\", \"a\": \"Thank you for pointing this out. All CA-DPO results presented in Tables 1, 2, 3, and 4 are based on models that were initially trained with IDK-IT. This detail is discussed in Appendix D, and we apologize for not explicitly mentioning this in the main text.\"}", "{\"title\": \"Welcome to Discuss\", \"comment\": \"Thank you for your insightful feedback and the time spent reviewing our paper. We recognize the importance of comprehensively addressing your concerns and are committed to resolving the issues. Should there be any aspects of our response that you find unclear or if further discussion is needed, please feel free to let us know. 
We are fully prepared to offer additional clarifications or engage in more detailed discussions to resolve the concerns.\\n\\nWe truly value your meticulous attention and thoughtful insights throughout this process. Your expertise and guidance are indispensable to us, and we eagerly await your further feedback.\"}", "{\"title\": \"Author Response to Reviewer sbyo(Part 2)\", \"comment\": \"> (Q3 & W4) Boundary and Hallucination Exploration: How does InBoL\\u2019s boundary creation address hallucination causes and refusal failures? Defining boundaries by confidence thresholds doesn\\u2019t appear to directly address why hallucinations or refusal failures might occur. Would expanding boundary classifications or including visual boundary explanations aid in understanding these limitations?\", \"a3\": \"The information boundaries introduced in InBoL are designed to provide a guiding framework for systematically identifying and managing scenarios where the model lacks sufficient information. Based on these identified cases, we developed a data generation pipeline and training strategies (IDK-IT and CA-DPO) to train the model to refuse to answer when adequate information is unavailable, thereby reducing the occurrence of hallucinations. One of the primary causes of hallucinations in MLLMs is their inability to recognize when a query lies beyond their knowledge or perceptual capacity. The InBoL framework specifically addresses this challenge by training the model to appropriately refuse to answer in such situations, thereby directly addressing this cause of hallucinations.\\n\\nIt is important to highlight that boundary creation is not intended to eliminate refusal failures but rather serves as a guiding framework to help the model learn to refuse appropriately. In fact, refusal failures are inherently unavoidable. First, accurately assessing its own boundaries (confidence) is a challenge for models. 
Second, as demonstrated in Section 4.3.2, we observe that the model occasionally attempts to answer questions with low confidence, resulting in some refusal failures. However, this behavior ensures a balance between maintaining the model's overall performance and improving its refusal rates.\\n\\nIf we have misunderstood your question or if there is any aspect of our response that is unclear, please feel free to let us know.\\n\\n> (Q4 & W6) Dependence on GPT-4o: How does the dependence on GPT-4o for data generation impact generalizability? Relying on GPT-4o to generate unanswerable questions may risk inheriting its biases. Could you describe any mitigations for GPT-4o-induced biases in the dataset and discuss InBoL\\u2019s performance if an alternative model or method is used to create unanswerable data?\", \"a4\": \"Thank you for raising this important question. First, it is important to note that only a small portion of the unanswerable data in our experiments was generated using GPT-4o (approximately 1k samples). As a result, the overall impact of GPT-4o-generated data on the model\\u2019s performance is limited.\\n\\nSecond, we carefully designed prompts for GPT-4o to ensure that the generated unanswerable questions were strongly related to the accompanying images. Specifically, GPT-4o was instructed to create unanswerable questions from two perspectives, as shown in Figure 8 in the appendix. Additionally, we implemented a filtering mechanism to address potential hallucination issues from GPT-4o, ensuring that only high-quality unanswerable questions were included in the dataset.\\n\\nFinally, we conducted additional experiments using an open-source MLLM (Qwen2-VL-72B-Instruct) to generate and filter data. The training results on this dataset were comparable to those achieved with GPT-4o-generated data, highlighting the generalizability of our pipeline. 
In the revised version of the paper, we have included a new section in Appendix G.3 to discuss this issue in greater depth, with updates highlighted in blue for clarity. More detailed results can also be found in this section.\\n\\n| Model for data gen | method | Vizwiz(ua) | VQAv2-IDK(filter) | BeyondVisQA |\\n|--------------------|--------|------------|-------------------|-------------|\\n| GPT-4o | IDK-IT | 76.01 | 81.42 | 75.25 |\\n| GPT-4o | CA-DPO | 69.97 | 70.63 | 67.75 |\\n| Qwen2-VL | IDK-IT | 74.49 | 79.25 | 72.50 |\\n| Qwen2-VL | CA-DPO | 71.39 | 75.22 | 69.50 |\"}", "{\"title\": \"Author Response to Reviewer 9ssd(Part 2)\", \"comment\": \"> Q3: The paper also falls short in providing examples of unanswerable questions with varying types and levels of difficulty, limiting the demonstration of question diversity. It is unclear whether different prompts or variations in prompt structure are used to encourage this diversity. Furthermore, the strategy for generating unanswerable questions might need to be dynamically adjusted based on the capabilities of different models, but this aspect is not explored. Do you think the strategy for generating unanswerable questions should be adjusted dynamically depending on the model's capabilities?\", \"a3\": \"Thank you for your insightful feedback. We acknowledge the omission of specific examples of unanswerable questions in the original manuscript. In our revised version, we have included various unanswerable questions in Appendix D.2 to provide better clarity. Furthermore, Figure 8 in the appendix showcases the prompts used to generate unanswerable questions. These prompts encourage the generation of image-related but unanswerable questions by leveraging GPT-4o, ensuring diversity in the generated questions.\\n\\nRegarding the generalizability and robustness of our data generation method, we conducted additional experiments using other advanced open-source MLLMs, such as Qwen2-VL 72B. 
The unanswerable questions generated by Qwen2-VL 72B yielded results comparable to those produced using GPT-4o, demonstrating the generalizability of our pipeline. Detailed results are provided in Appendix G.3 of the revision of our paper, and an excerpt of the evaluation is shown below:\\n\\n| Model for data gen | method | Vizwiz(ua) | VQAv2-IDK(filter) | BeyondVisQA |\\n|--------------------|--------|------------|-------------------|-------------|\\n| GPT-4o | IDK-IT | 76.01 | 81.42 | 75.25 |\\n| GPT-4o | CA-DPO | 69.97 | 70.63 | 67.75 |\\n| Qwen2-VL | IDK-IT | 74.49 | 79.25 | 72.50 |\\n| Qwen2-VL | CA-DPO | 71.39 | 75.22 | 69.50 |\\n\\nWe also agree with your suggestion that the strategy for generating unanswerable questions could be dynamically adapted based on the capabilities of the models. While our current approach leverages powerful models such as GPT-4o and Qwen2-VL-72B, we acknowledge that weaker or smaller models, such as LLaVA-Next 7B, may require more tailored and sophisticated prompts to generate high-quality unanswerable data. However, given the availability and reliability of existing advanced MLLMs, we have prioritized utilizing these resources for our pipeline, as they effectively minimize the need for such adjustments.\"}", "{\"title\": \"Author Response to Reviewer f1e7(Part 2)\", \"comment\": \"> Q2: While the paper builds up on intrinsic and extrinsic knowledge sources and that knowledge can come from model's parameters or from the visual content, there are no experiments to dissect this aspect of knowledge grounding. This analysis can significantly improve the quality of the paper (given the framing of the method for increasing trustworthiness)\", \"a\": \"As noted in line 61 of the paper, the majority of existing approaches for training models to refuse responses are limited to unimodal LLMs and cannot be directly applied to MLLMs. 
In addition, as for MLLMs, only a few works have explored constructing unanswerable questions to mitigate hallucinations. However, these methods either lack open-sourced model weights [1] or rely on earlier, underperforming MLLMs [2], which prevents a meaningful or fair comparison with our approach. Fundamentally, these methods focus only on introducing questions beyond the extrinsic boundary and employ supervised fine-tuning (SFT) to train the model to refuse. As stated in lines 404\\u2013407 of the paper, we also implemented this approach and presented the results in Table 1 and Table 2 under the \\\"SFT\\\" row. The findings clearly demonstrate that our methods, IDK-IT and CA-DPO, significantly outperform SFT, providing strong evidence of the effectiveness of our proposed framework.\\n\\n[1] Sungguk Cha, Jusung Lee, Younghyun Lee, and Cheoljong Yang. Visually dehallucinative instruction generation. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5510\\u20135514. IEEE, 2024.\\n\\n[2] Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Mitigating hallucination in large multi-modal models via robust instruction tuning. In The Twelfth International Conference on Learning Representations, 2023b.\\n\\n\\nWe hope this clarifies your concerns, and we are grateful for your suggestions.\"}", "{\"comment\": \"Thank you for your response. I am inclined to maintain a positive evaluation.\"}", "{\"summary\": \"This paper presents Information Boundary-aware Learning Framework (InBoL) to enhance the reliability of Multimodal Large Language Models (MLLMs) by systematically training them to recognize their knowledge and perception 'intrinsic' and 'extrinsic' boundaries and refuse to respond when they lack sufficient information. 
The InBoL framework includes a data generation pipeline to generate structured data (with refusal questions generated with gpt-4o prompting) for 'IDK' Instruction Tuning (IDK-IT) and Confidence-aware Direct Preference Optimization (CA-DPO) (built on top of https://github.com/opendatalab/HA-DPO) \\u2014 the dataset is designed to improve the model\\u2019s accuracy in refusal responses without sacrificing helpfulness to some extent. The paper adopts a user-centric evaluation approach, emphasizing human preference as the core metric for assessing trustworthiness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"the paper disentangles existing value alignment methods (as noted in section 2.1) to create a model-agnostic objective by using user preferences for DPO and generating structured datasets using confidence sampling and LLM judges for creating model fine-tuning recipes.\", \"the paper integrates refusal as an explicit mechanism for trustworthiness. This systematic refusal approach seems to be unique among MLLM alignment techniques.\", \"The paper\\u2019s combined approach for instruction tuning for cautious response handling and CA-DPO for adaptive preference optimization proves effective in experimental results, especially for out-of-domain (OOD) tasks.\"], \"weaknesses\": [\"While IDK-IT effectively reduces misinformation, it can limit model's helpfulness (contrary to what the authors claim). One can increase truthfulness by giving refusal responses along with some justification for why the model refused to answer. Authors have not addressed this aspect of refusal with reasoning in the paper.\", \"For section 3.1, authors give an example for extrinsic information boundary - similarly, it will help to give an example for intrinsic information boundary. Also, Figure 1 examples ii and iii, how can we use the protocol mentioned in the paper to respond appropriately to the questions. 
The demarkation of intrinsic and extrinsic responses is still confusing.\"], \"questions\": [\"In Table 3, it's not clear if the CA-DPO results also include the IDK-IT step.\", \"While the paper builds up on intrinsic and extrinsic knowledge sources and that knowledge can come from model's parameters or from the visual content, there are no experiments to dissect this aspect of knowledge grounding. This analysis can significantly improve the quality of the paper (given the framing of the method for increasing trustworthiness)\", \"There is no comparison with other methods mentioned in the paper for increasing trustworthiness of reducing hallucinations as well as response refusal.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors of this paper aim to address a key issue in multimodal large language models (MLLMs). While MLLMs are powerful and can handle various types of data, they often generate inaccurate or incorrect responses, which diminishes user trust. Many existing approaches overlook the importance of \\\"refusal to answer.\\\" In fact, enabling models to refuse answering when information is insufficient can make them more reliable. To address this gap, the authors propose a new framework called InBoL, which trains models to refuse to answer when uncertain. This framework also defines clear criteria for when a refusal is appropriate and introduces a comprehensive data generation process and training strategy to enhance the model's ability to refuse when necessary.\\n\\nThey also present evaluation methods for assessing the model's trustworthiness, focusing on user experience and relevant scoring metrics. 
Experimental results show that InBoL significantly improves refusal accuracy without compromising the model's usefulness, making it more trustworthy overall.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper introduces a new data construction pipeline that systematically classifies questions based on model dependence, generates unanswerable questions and organizes them in a standardized format for further training. The confidence estimation method improves the accuracy of the answer by combining string matching and LLM evaluation, ensuring a more robust assessment of model knowledge. The generation of unanswered questions allows the model to learn not to provide incorrect responses when there is insufficient information, but to choose to refuse to answer, thereby improving the credibility of the model and reducing the risk of misleading users. The Advanced training strategies improve model reliability by balancing accuracy and rejection rates, further reducing false information while improving performance. Experiments on different datasets prove the generality and reliability of the method, making this method very valuable for future multimodal research.\", \"weaknesses\": \"The paper lacks a thorough explanation of how to balance between increasing the refusal rate and maintaining accuracy. The trade-off mechanism between these two aspects is not sufficiently discussed. Additionally, while the current setup may not consider user frustration as a major concern, I\\u2019m still curious whether this could become an issue. Would it be beneficial for the model to offer users some form of feedback when refusing to answer, such as providing partial information or clarifying why it cannot respond? 
This might help improve user experience and prevent frustration by giving more context to the refusal, rather than leaving users without any explanation.\\n\\nThe approach of generating unanswerable questions by randomly swapping images and questions may not effectively simulate real-world unanswerable scenarios. It raises a concern about whether this method accurately reflects realistic situations where a model cannot provide a correct answer.\\n\\nThe paper also falls short in providing examples of unanswerable questions with varying types and levels of difficulty, limiting the demonstration of question diversity. It is unclear whether different prompts or variations in prompt structure are used to encourage this diversity. Furthermore, the strategy for generating unanswerable questions might need to be dynamically adjusted based on the capabilities of different models, but this aspect is not explored.\", \"questions\": \"Could you elaborate on how you balance increasing the refusal rate with maintaining accuracy? It would be helpful to understand more about the trade-off mechanism between these two goals. Additionally, do you have any strategies in place to address potential user frustration if the model refuses too often?\\n\\nDo you think the strategy for generating unanswerable questions should be adjusted dynamically depending on the model's capabilities?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
Given the potential for user reliance on refusal responses, would a more thorough discussion on the ethical impacts of this feature strengthen the paper\\u2019s contextual relevance?\", \"a7\": \"Thank you for your thoughtful and constructive feedback.\\n\\nWe greatly appreciate your suggestion to expand on the ethical implications of refusal mechanisms and to provide a more comprehensive discussion on the limitations of our work. Below, we address these points in detail: \\n\\n1. Ethical Considerations of Refusal Mechanisms: \\nWe believe that the refusal mechanism plays a crucial role in reducing users' dependency on the model, thereby minimizing the risk of inadvertently misleading users. As such, we regard this mechanism as ethically sound. However, we acknowledge that refusal mechanisms may have unintended consequences, such as causing user frustration or dissatisfaction. While the core concept remains ethically robust, we recognize the importance of further discussion on its limitations, particularly in the context of user experience.\\n\\n2. Limitations: \\nIn this work, we did not explore the generation of explanations for refusal responses, an important and underexamined area. From the model's perspective, many questions require reasoning processes to determine whether sufficient information is available to provide an accurate answer. By incorporating explanations for refusal responses, the model could better learn when to refuse appropriately, thereby enhancing its awareness of its own limitations and boundaries. From the user's perspective, unexplained refusals may lead to confusion or dissatisfaction. Providing clear and interpretable justifications for refusals could make the refusal mechanism more transparent and user-friendly, significantly improving the overall user experience.\\nFor future work, we plan to focus on enabling the model to generate well-reasoned and contextually appropriate refusal explanations. 
This will involve developing methodologies for constructing relevant datasets and designing robust evaluation frameworks to assess the quality and relevance of the generated explanations. By making refusal responses more informative and transparent, we aim to further enhance the trustworthiness of the model while ensuring a more positive and engaging user experience.\\nWe have added this limitation to the revised version of Appendix B for clarity. \\n\\n3. Cost of InBoL Implementation: \\nRegarding the computational cost of implementing InBoL, we emphasize that the framework does not require the use of the entire VQAv2, Oven, or SQA datasets. For IDK-IT and CA-DPO, we constructed approximately 11k and 24k samples, respectively. Generating datasets of this scale is computationally manageable and does not incur significant overhead.\\n\\nWe hope this clarifies your concerns, and we are grateful for your suggestions.\"}", "{\"title\": \"Author Response to Reviewer sbyo(Part 3)\", \"comment\": \"> (Q5 & W1)Limited Scope: What are your expectations for InBoL\\u2019s generalizability to larger models and domains beyond visual tasks? The experiments focus on smaller models (e.g., LLaVA 1.5) and visual LLMs. Providing experimental insights on InBoL\\u2019s scalability to more complex or varied models would clarify its broader applicability.\", \"a5\": \"Due to computational resource limitations, our experiments primarily focused on LLaVA 1.5 with 7B and 13B parameters. To further evaluate the scalability and generalization of InBoL, we have conducted additional experiments on a more advanced model, LLaVA-Next 7B. 
These results demonstrate that InBoL effectively enhances trustworthiness for LLaVA-Next 7B, indicating its potential applicability to larger and more advanced models.\\n\\n| Method | ID-Overall Acc | ID-Overall RefR | ID-Overall Score | AOKVQA Acc | AOKVQA RefR | AOKVQA Score | GQA Acc | GQA RefR | GQA Score | BeyondVisQA RefR | MMMU Acc | MMMU RefR | MMMU Score | MMBench Acc | MMBench RefR | MMBench Score |\\n|-----------------|----------------|------------------|------------------|------------|-------------|--------------|---------|----------|-----------|------------------|----------|-----------|------------|-------------|--------------|---------------|\\n| Original | 54.50 | 5.00 | 14.00 | 82.97 | 0.00 | 65.94 | 61.00 | 0.00 | 22.00 | 16.50 | 35.80 | 0.00 | -28.39 | 63.49 | 0.00 | 26.98 |\\n| Refusal Prompt | 51.90 | 9.40 | 13.20 | 70.41 | 16.38 | 57.20 | 60.47 | 2.12 | 23.06 | 48.00 | 31.78 | 10.44 | -26.00 | 61.25 | 0.34 | 22.84 |\\n| SFT | 53.10 | 12.30 | 18.50 | 75.81 | 5.85 | 57.47 | 61.16 | 1.21 | 23.53 | 76.25 | 35.78 | 1.00 | -27.44 | 64.95 | 0.17 | 30.07 |\\n| IDK-IT | 46.10 | 29.90 | 22.10 | 66.99 | 22.95 | 56.93 | 50.75 | 23.75 | 25.25 | **76.25** | 19.22 | 55.67 |**-5.89** | 58.42 | 16.75 | 33.59 |\\n| CSA-DPO | 52.60 | 29.50 | **34.70** | 76.55 | 14.31 | **67.41** | 60.50 | 11.75 | **32.75** | 71.75 | 23.22 | 45.11 | -8.44 | 60.40 | 18.64 | **39.43** |\\n\\n\\n> (Q6 & W2)Lack of Explanatory Rejection: Would explanatory feedback enhance trust in refusal responses? InBoL\\u2019s refusal responses currently lack explanations, potentially limiting user trust. Have you considered integrating explanatory refusals to improve user understanding, particularly when refusals stem from visual limitations or knowledge boundaries?\", \"a6\": \"Thank you for your insightful question. We agree that providing explanatory feedback in refusal responses has the potential to significantly enhance user trust. 
By offering clear explanations for why a refusal occurs\\u2014such as limitations in visual input or knowledge boundaries\\u2014the model could improve not only its self-awareness but also the user experience by fostering a greater understanding of its reasoning.\\n\\nWe did consider incorporating explanatory refusals during the development of our framework. However, we found that doing so would substantially increase the complexity of both the data generation and evaluation processes. This additional complexity would require extensive work to ensure that the explanations are accurate, relevant, and user-friendly. As such, we chose to only focus on basic refusal responses in this study. We recognize this as a limitation of our current work and have included a discussion on it in Appendix B of the revised paper. Exploring explanatory feedback is a promising direction for future research, and we plan to address this in subsequent studies.\"}", "{\"summary\": \"This paper presents the Information Boundary-aware Learning Framework (InBoL) for enhancing the trustworthiness of multimodal large language models (MLLMs). MLLMs often produce hallucinated or inaccurate responses, especially when faced with ambiguous or unfamiliar inputs. InBoL addresses this by training models to recognize \\u201cinformation boundaries\\u201d\\u2014distinguishing between questions they can answer confidently and those they should refuse. The framework leverages two novel training techniques: \\u201cI Don\\u2019t Know\\u201d (IDK) Instruction Tuning (IDK-IT) and Confidence-aware Direct Preference Optimization (CA-DPO), both aimed at improving refusal responses for uncertain or ambiguous queries.\\n\\nTo evaluate trustworthiness, the authors introduce a user-centered metric that rewards accurate and helpful responses while penalizing misinformation. 
Experimental results indicate that InBoL improves refusal accuracy without compromising the helpfulness of responses, setting a new benchmark for trustworthiness training in MLLMs. This work proposes a robust approach to model alignment for safe and reliable AI responses, particularly in vision-language tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper leverages a decrease in accuracy robustness of some VLLM\\u2019s responses to create a dataset that establishes clear boundaries for confident responses, incorporating refusal mechanisms for cases where the model might otherwise respond incorrectly. Through training, combined with experimentation, the study shows the reduction of unreliable responses through the use of synthetic data in training.\", \"weaknesses\": [\"Limited Scope: Experiments focus narrowly on Visual LLMs (with claim for MLLMs) and small models (e.g., LLaVA 1.5), lacking insights into broader MLLM applicability or resource requirements.\", \"Lack of Explanatory Rejection: The model is trained to learn outright refusals that lack explanations. Adding explanations or visualizations of boundaries could enhance user trust.\", \"Conceptual and Metric Issues: The construction of the \\\"unknown\\\" dataset used for training lacks clear justification for how it genuinely promotes trustworthiness in LLMs. Furthermore, the trustworthiness score proposed in the paper is unnormalized, which can differ across datasets with different theoretical max accuracy rates. Answered accuracy might be a more objective measure. There\\u2019s also a lack of motivation behind the metrics: the authors say the old metric is bad because the \\u201cunknown\\u201d questions domain is hard to know, but then create such a domain in Fig. 3\", \"Boundary and Hallucination Exploration: The simplistic boundary approach doesn\\u2019t adequately address the cause of hallucination, or account for the decreased accuracy of responses. 
This investigation into the causes of failure to refuse is lacking.\", \"Methodology Novelty: Core techniques (PEFT, DPO) are standard, limiting innovation, and newer methods like inference-time corrections could improve boundary handling.\", \"Dependence on GPT-4o: Dependence on GPT-4o for data generation risks biases from its potential hallucinations. Limited discussion of pipeline generalizability weakens applicability.\", \"Missing Literature, Limitations and Ethics Review: A more complete literature review and ethical discussion are needed (not in the appendix), currently relegated to the appendix, to frame the work\\u2019s limitations and ethical impact fully. Also, the cost of InBoL is not reported; generating such datasets with model responses on so many benchmarks can be heavy in practice.\"], \"questions\": [\"How does InBoL innovate beyond existing methods like PEFT and DPO? While InBoL uses \\u201cI Don\\u2019t Know\\u201d (IDK) Instruction Tuning and Confidence-aware Direct Preference Optimization (CA-DPO), which build on established techniques, could you clarify any additional features that make InBoL uniquely suited to MLLMs? Consider highlighting or incorporating advanced techniques, such as inference-time adjustments, to distinguish InBoL further.\", \"Why did you choose an unnormalized trustworthiness score over a normalized or cross-comparable metric? The score is dataset-dependent, which could affect its generalizability across datasets. Could you discuss why this metric was prioritized over an alternative like answered accuracy? Introducing a normalized metric or justifying this choice could clarify how trustworthiness is assessed across various MLLMs and test sets.\", \"How does InBoL\\u2019s boundary creation address hallucination causes and refusal failures? Defining boundaries by confidence thresholds doesn\\u2019t appear to directly address why hallucinations or refusal failures might occur. 
Would expanding boundary classifications or including visual boundary explanations aid in understanding these limitations?\", \"How does the dependence on GPT-4o for data generation impact generalizability? Relying on GPT-4o to generate unanswerable questions may risk inheriting its biases. Could you describe any mitigations for GPT-4o-induced biases in the dataset and discuss InBoL\\u2019s performance if an alternative model or method is used to create unanswerable data?\", \"What are your expectations for InBoL\\u2019s generalizability to larger models and domains beyond visual tasks? The experiments focus on smaller models (e.g., LLaVA 1.5) and visual LLMs. Providing experimental insights on InBoL\\u2019s scalability to more complex or varied models would clarify its broader applicability.\", \"Would explanatory feedback enhance trust in refusal responses? InBoL\\u2019s refusal responses currently lack explanations, potentially limiting user trust. Have you considered integrating explanatory refusals to improve user understanding, particularly when refusals stem from visual limitations or knowledge boundaries?\", \"Could you expand on the ethical implications of refusal mechanisms? The ethical considerations section is limited, mainly relegated to the appendix. Given the potential for user reliance on refusal responses, would a more thorough discussion on the ethical impacts of this feature strengthen the paper\\u2019s contextual relevance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your insightful feedback, which has significantly contributed to improving our paper. This is a gentle reminder since we have only a few days until the discussion period ends. If you feel our response and revisions have addressed your concerns, we would be grateful for your continued strong support. 
Please let us know if you have any additional suggestions for improvement.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Author Response to Reviewer sbyo(Part 1)\", \"comment\": \"Thank you for your review and valuable feedback on our paper. We responded in detail as follows:\\n\\n> (Q1 & W5) Methodology Novelty: How does InBoL innovate beyond existing methods like PEFT and DPO? While InBoL uses \\u201cI Don\\u2019t Know\\u201d (IDK) Instruction Tuning and Confidence-aware Direct Preference Optimization (CA-DPO), which build on established techniques, could you clarify any additional features that make InBoL uniquely suited to MLLMs? Consider highlighting or incorporating advanced techniques, such as inference-time adjustments, to distinguish InBoL further.\", \"a1\": \"Thank you for your comments. The InBoL framework is distinct in its explicit incorporation of refusal as a mechanism to enhance trustworthiness, training MLLMs to recognize and respond appropriately when they lack sufficient information, which is unique among MLLM alignment techniques. To achieve this, InBoL introduces the information boundaries (Section 3.1) and includes a comprehensive data construction pipeline (Section 3.2) specifically designed to generate high-quality training data for refusal responses. Therefore, the core innovations of our framework include not only IDK-IT and CA-DPO but also the introduction of information boundary and a detailed construction pipeline of multimodal instruction with refusal response.\\nWhile we recognize that inference-time adjustments could potentially enhance boundary handling further, our primary focus is on training-based methods to instill boundary awareness in MLLMs, leaving inference-time optimizations as a promising direction for future exploration.\\n\\n> (Q2 & W3)Conceptual and Metric Issues: Why did you choose an unnormalized trustworthiness score over a normalized or cross-comparable metric? 
The score is dataset-dependent, which could affect its generalizability across datasets. Could you discuss why this metric was prioritized over an alternative like answered accuracy? Introducing a normalized metric or justifying this choice could clarify how trustworthiness is assessed across various MLLMs and test sets.\", \"a2\": \"The motivation behind our proposed evaluation method stems from limitations observed in prior approaches, which often require constructing an \\\"unknown\\\" test set for each model. This construction process incurs high computational costs due to the need for multiple sampling. For model training, these sampling costs are manageable, allowing us to collect \\\"unknown\\\" questions as described in Figure 3; however, for evaluation purposes, such a costly approach would limit practicality across models.\\n\\nThe choice of metrics is fundamentally tied to the definition of trustworthiness. In this work, we adopt a user-centric perspective, proposing that a trustworthy model should maximize helpful responses while minimizing misinformation. Accordingly, we define a user-centered value function and further propose the trustworthiness score that reflects a balanced view of trustworthiness by taking both accuracy and refusal rate into account.\\n\\nWhile \\\"Answered Accuracy\\\" might be considered as an alternative, it has a significant limitation: it tends to encourage overly conservative behavior at the expense of overall accuracy. If Answered Accuracy were used as the primary metric, the optimal strategy for a model would be to refuse all questions where its confidence is less than 1. While this could lead to an ideal Answered Accuracy close to 100%, it would severely compromise the model\\u2019s helpfulness by refusing many questions where it could have provided useful and accurate responses.\\n\\nIn contrast, our trustworthiness score strikes a better balance. 
By explicitly assigning a score of 1 to correct responses and 0 to refusal responses, the metric incentivizes models to maintain high accuracy while leveraging refusal as a safeguard against misinformation. This balanced approach encourages models to provide as many correct responses as possible while prudently refusing only when necessary, thereby aligning with our expectations of trustworthiness in MLLMs.\\n\\nIf we have misunderstood your question or if there is any aspect of our response that is unclear, please feel free to let us know.\"}" ] }
C4H45A9cZa
Hierarchical Multiscale Diffuser for Extendable Long-Horizon Planning
[ "Chang Chen", "Hany Hamed", "Doojin Baek", "Yoshua Bengio", "Sungjin Ahn" ]
This paper introduces the Hierarchical Multiscale Diffuser (HM-Diffuser), a novel approach for efficient long-horizon planning. Building on recent advances in diffusion-based planning, our method addresses the challenge of planning over horizons significantly longer than those available in the training data. We decompose the problem into two key subproblems. The first phase, Progressive Trajectory Extension (PTE), involves stitching short trajectories together to create datasets with progressively longer trajectories. In the second phase, we train the HM-Diffuser on these extended datasets, preserving computational efficiency while enhancing long-horizon planning capabilities. The hierarchical structure of the HM-Diffuser allows for subgoal generation at multiple temporal resolutions, enabling a top-down planning approach that aligns high-level, long-term goals with low-level, short-term actions. Experimental results demonstrate that the combined PTE and HM-Diffuser approach effectively generates long-horizon plans, extending far beyond the originally provided trajectories.
[ "Long-Horizon Planning", "Diffusion", "Hierarchical", "Multiscale" ]
https://openreview.net/pdf?id=C4H45A9cZa
https://openreview.net/forum?id=C4H45A9cZa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "XXLEM2kPUg", "Qitz5dYMDv", "Cm02JNxB5v", "BVeTlaWEgo", "0sJufbEK6b" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730914451112, 1730763539964, 1731883739976, 1732504448462, 1730931928660 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10313/Reviewer_cj6K" ], [ "ICLR.cc/2025/Conference/Submission10313/Reviewer_VVjy" ], [ "ICLR.cc/2025/Conference/Submission10313/Reviewer_npsz" ], [ "ICLR.cc/2025/Conference/Submission10313/Authors" ], [ "ICLR.cc/2025/Conference/Submission10313/Reviewer_QCtk" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces the Hierarchical Multiscale Diffuser (HM-Diffuser), a new framework for extending long-horizon planning in reinforcement learning by integrating Progressive Trajectory Extension (PTE) with hierarchical diffusion models. To overcome the constraints of current diffusion-based planners, the authors propose PTE, which iteratively stitches short trajectories into longer sequences, enabling plans that surpass the original data\\u2019s horizon. Through experiments in Maze2D, Gym-MuJoCo, and high-dimensional manipulation tasks like FrankaKitchen, HM-Diffuser often performs better than existing models, such as Decision Diffuser (DD) and Hierarchical Diffuser (HD).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"**Originality**: The paper introduces an approach for long-horizon planning in reinforcement learning (RL) by combining hierarchical diffusion models with Progressive Trajectory Extension (PTE). 
The PTE mechanism, which stitches shorter trajectories to create longer ones, is an innovative solution to data scarcity in extended planning, expanding RL capabilities in complex, long-term tasks.\\n\\n**Quality**: The methodology is shown to work, and includes experiments across multiple benchmarks (Maze2D, Gym-MuJoCo, FrankaKitchen) to demonstrate the model\\u2019s adaptability to diverse environments, though a more rigorous formulation could enhance the soundness. Comparisons to baselines, such as Decision Diffuser (DD) and Hierarchical Diffuser (HD), show HM-Diffuser\\u2019s advantages.\\n\\n**Clarity**: The structure of the paper is okay; each component is presented. Explanations of diffusion models, hierarchical planning, and the overall framework are clear, making the concepts accessible to those familiar with RL. Visual aids and pseudocode support the narrative, though additional clarifications could further enhance readability.\\n\\n**Significance**: This work addresses a critical challenge in RL\\u2014planning over horizons longer than the training data allows. The individual components seem to fit together well; however, it\\u2019s hard to justify the applicability of the combined framework in general, reducing its broader applicability.\", \"weaknesses\": \"**Overly Ambitious Claims**: Throughout the paper, the authors motivate that we need models that allow robots to plan over \\u201cweek- or month-long\\u201d horizons based on visual experiences, which is overly ambitious and detracts from credibility. Limiting claims to the demonstrated capabilities would improve reliability.\\n\\n**Ad-Hoc Components Without Rigorous Justification**: Key components, such as linear and exponential PTE, and APP, appear as ad hoc solutions without clear theoretical or empirical support. 
This engineered approach may limit generalizability, and a stronger theoretical basis or clarification of assumptions is needed.\\n\\n**Unclear Scoring and Metrics**: The scores in Tables 1, 2, and 3 lack clear explanation, making it difficult to interpret model performance. Providing definitions and explanations for these metrics would improve result clarity.\\n\\n**Omission of Training Time and Resource Comparisons**: No information on training times or resource demands is provided. Given the model\\u2019s complexity, details on computational efficiency would help assess feasibility for real-world use.\\n\\n**Lack of Compounding Error Analysis**: The paper claims to tackle one fundamental issue with prior methods, i.e., compounding errors, but does not provide a direct comparison with other models. An explicit analysis would strengthen the validation of this claim.\\n\\n**Missing Subgoal Visualizations**: Visuals of subgoals generated by the Hierarchical Multiscale Diffuser would clarify the multiscale planning process and show how subgoals contribute to task performance.\\n\\n**Undefined Contribution of Hierarchical Multiscale Diffuser**: The paper does not isolate the performance improvements introduced by the Hierarchical Multiscale Diffuser (HMD) over PTE alone. 
Quantifying these contributions would clarify the value of HMD.\\n\\n**No Ablation Study on Key Parameters**: An ablation study on jump lengths and counts is missing, which would help clarify their impact on model performance across tasks and aid in parameter tuning.\\n\\n**Reproducibility Limitations**: Although hyperparameters and references are provided, sharing the code (through an anonymized repository) would enhance reproducibility, especially given the model\\u2019s complexity.\\n\\n**Insufficient and Unreferenced Visuals**: Additional visuals and proper referencing, especially for Figures 1 and 2, would improve clarity on hierarchical planning and recursive processes.\\n\\n**Fixed Segment Sizes Limit Generalizability**: The use of fixed segment sizes in trajectory stitching may limit adaptability to complex tasks like Franka Kitchen, where variable segment sizes may be necessary.\\n\\n**Lack of Comparison to Existing Stitching Methods**: Existing stitching methods are briefly mentioned as limited (line 177) without specification, making it hard to assess PTE\\u2019s novelty. A clear comparison would better contextualize the contributions of PTE.\", \"questions\": \"Most of my comments on the \\u2018weaknesses\\u2019 section can be treated as questions. Here are some other points:\\n\\n**Trajectory Information**: Do the trajectories contain only positional information (2D or 3D), or do they also include other state variables, such as velocity or acceleration? How does this impact the model\\u2019s generalizability across different environments?\\n\\n**Outstretching and Bridge Trajectory Sampling**: Euclidean distance is used for outstretching and bridge trajectory sampling. Given that this may not capture feasible paths in complex environments, e.g., for constrained robotic (manipulation) tasks, how is the distance threshold determined, and do you adjust it for different environments? 
What mechanisms, if any, are in place to account for constraints, or to ensure that the stitched trajectory remains feasible?\\n\\n**Trajectory Feasibility Checks**: Are there any feasibility checks for the complete trajectory generated by PTE? If a stitched trajectory is infeasible, what measures are in place to detect and handle this?\\n\\n**Hierarchical Diffuser and Position-Based Goals**: It appears the Hierarchical Diffuser primarily generates subgoals based on position. Could it be adapted to environments where position alone may not represent effective goals, or where additional context (e.g., velocity or object interactions) is needed?\\n\\n**Realism and Diversity of Generated Trajectories**: The example maze trajectories in Figure 3 appear repetitive. Does it play a role in the algorithm performance?\\n\\n**Application to High-Dimensional Visual Data**: In the discussion of future work, you mention visual observations. How do you envision adapting distance-based metrics like Euclidean distance in pixel-based environments, where state representation is more complex?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a hierarchical multi-scale diffusion planner that extends diffusion planning to longer trajectories than they were trained on via a stitching operation. Favorable results are demonstrated on a 2D maze and modified versions of the D4RL/FrankaKitchen benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is mostly well-written\", \"The work addresses a relevant limitation in diffusion planning\"], \"weaknesses\": [\"The results are difficult to assess as the benchmarks either seem rather trivial (2D maze) or are modified versions of standard benchmarks for which there are no suitable baselines (yet at least). 
Diffusion policies have previously produced SOTA results on, e.g., certain manipulation tasks; results on 2D mazes or your own variants of extended benchmarks against fixed-length diffusion planning are not entirely convincing.\"], \"questions\": \"Shouldn't the extra flexibility in trajectory length of your method be useful in some existing benchmark with more established baselines?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents the Hierarchical Multiscale Diffuser (HM-Diffuser), an approach for long-horizon planning, leveraging diffusion-based planning to handle planning tasks with timelines longer than those found in training data. The authors propose to break down the planning process into two main stages. First, they introduce Progressive Trajectory Extension (PTE), a method for combining shorter trajectories into longer datasets. Then, they train the HM-Diffuser on these extended datasets, maintaining computational efficiency while enhancing long-term planning. HM-Diffuser\\u2019s hierarchical design enables it to generate subgoals at various temporal scales, facilitating a top-down approach that bridges high-level objectives and immediate actions. Experiments show that this combined approach successfully produces plans that exceed the initial trajectory lengths, demonstrating effective long-horizon planning capabilities.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Paper is well written, clear and well-structured. The motivation and the idea of the paper are clear. The approach is clearly stated and all relevant background is introduced well. 
The method explanation is accompanied by appropriate visualisation figures that aid understanding.\", \"The proposed method seems reasonable and well thought out, with clear description and theoretical backing.\", \"The proposed method achieves good performance on target experiments.\", \"The approach is tested in several simulation experiments.\"], \"weaknesses\": [\"The choice of subgoals seems a bit arbitrary and there should be more analysis and ablation studies on these. It is not fully clear how this choice of segment length and extensions works in different problems. E.g. in XXL Maze they seem to work well. It is not clear for the other domains.\", \"The name for section 5.3.3. is not informative.\", \"The PD controller used here is not explained.\", \"The improvements on Franka Kitchen Task in table 3 seem to be marginal.\"], \"questions\": [\"Can you provide more intuition on choosing subgoals? In which environments should it work better?\", \"What ensures the feasibility of trajectories?\", \"Can you provide more information about the PD controller?\", \"In Figure 4 the x axis as it is seems a bit unusual and hard to interpret. Perhaps you could use a stacked barplot?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
They present their Hierarchical Multiscale Diffuser (HM-Diffuser) framework, which consists of two main steps, the Progressive Trajectory Extension (PTE) and the hierarchical multiscale planner (HMD). PTE stitches together short trajectories to generate longer ones, while HMD trains on these extended datasets to improve its long-horizon planning capabilities. The paper further introduces several improvements to HMD, including Adaptive Plan Pondering and a recursive version of HMD, which uses a single model to handle multiple temporal scales. The authors present experiments on a set of planning tasks that demonstrate how HMD can generate long-horizon plans that extend beyond the original example trajectories and that outperform Decision Diffuser (DD) and Hierarchical Diffuser (HD) approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"In terms of originality, both the HM-diffuser and the PTE methods are novel. In addition, the extension of the benchmarks with the long-horizon versions can be useful for the community.\nThe authors present the problem clearly and support their claims with experimental results and evaluations in a set of (extended) benchmarks, though the evaluation of the results can benefit from a clearer presentation (see below). \nThe paper is well-organized and the related work is well structured. The paper is easy to follow and the authors provide a good level of detail for the implementation in the paper and in the appendix.\nThe work addresses a significant challenge and the evaluation against existing and extended benchmarks adds to the potential impact of the proposed approach.\", \"weaknesses\": \"While the results for the long-horizon planning for the 2d-maze are well presented and illustrated, the evaluation on the offline RL side can be better presented. It is not clear what longer paths mean for the gym-mujoco examples and for the kitchen task scenarios. 
What do the performance percentages represent in Table 2? Similarly, what the numbers in Table 3 represent in terms of performance in the kitchen task is not clear. A more detailed explanation of the evaluation process can help with the clarity in the results section.\", \"questions\": \"Typo in line 431, necessarity -> necessity\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
C45YqeBDUM
The KoLMogorov Test: Compression by Code Generation
[ "Ori Yoran", "Kunhao Zheng", "Fabian Gloeckle", "Jonas Gehring", "Gabriel Synnaeve", "Taco Cohen" ]
Compression is at the heart of intelligence. A theoretically optimal way to compress any sequence of data is to find the shortest program that outputs that sequence and then halts. However, such Kolmogorov compression is uncomputable, and code generating LLMs struggle to approximate this theoretical ideal, as it requires reasoning, planning and search capabilities beyond those of current models. In this work, we introduce the *KoLMogorov-Test* (KT), a compression-as-intelligence intelligence test for code generation LLMs. In KT a model is presented with a sequence of data at inference time, and asked to generate the shortest program that produces the sequence. We identify several benefits of KT for both evaluation and training: an essentially infinite number of problem instances of varying difficulty is readily available, strong baselines already exist, the evaluation metric (compression) cannot be gamed, and pretraining data contamination is highly unlikely. To evaluate current models, we use audio, text, and DNA data, as well as sequences produced by random synthetic programs. Current flagship models perform poorly - both GPT4-o and Llama-3.1-405B struggle on our natural and synthetic sequences. On our synthetic distribution, we are able to train code generation models with lower compression rates than previous approaches. Moreover, we show that gains on synthetic data generalize poorly to real data, suggesting that new innovations are necessary for additional gains on KT.
[ "Code generation", "code", "compression", "LLM", "dataset", "benchmark" ]
Accept (Poster)
https://openreview.net/pdf?id=C45YqeBDUM
https://openreview.net/forum?id=C45YqeBDUM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y9ItxCwFld", "wySgpElRuR", "vcrpUIlLus", "v13V1ykS1W", "tc2ABs2bv6", "smO6cbSltR", "s94bzarKKM", "rGpMpnyeYZ", "krZUW44ETw", "in0nbv1nws", "gj8oc3VyqX", "gRWvxOPNUI", "cnqXcFDXf5", "Z1NsidJvKh", "UvcQDJZRrm", "SsMOiMf0aR", "SdEdDCD9d3", "Q2FgcdhuqS", "NPXduJJx5Q", "NOEFtbKCea", "N5fKFvfrq0", "MzEtCXkBT9", "LdP6JfgdAP", "JS3M9Rna5Q", "8Et5tcIDcH", "7BmuVybApk", "15sJj7dzXg", "09hxtl7BOT" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1732879145982, 1730343305550, 1731953049618, 1731952613876, 1731952758381, 1732555433559, 1730855591247, 1731950569464, 1730901398377, 1737524002242, 1734898244809, 1732879553074, 1732645912340, 1731951449266, 1731952340164, 1732831484880, 1731949166633, 1731950322370, 1732879798171, 1731949581770, 1730733932902, 1731951086808, 1732514004294, 1732879329617, 1731031417314, 1731041382201, 1732855967941, 1731071167285 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_1MaY" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_uxqq" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_y77n" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_CMwz" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission9732/Area_Chair_uBqy" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_CMwz" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_y77n" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_ohAG" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Authors" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_uxqq" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_xD3h" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_xD3h" ], [ "ICLR.cc/2025/Conference/Submission9732/Reviewer_5eaV" ] ], "structured_content_str": [ "{\"comment\": \"Dear reviewer,\\n\\nWe are happy to hear our response was satisfactory. Thank you again for your helpful feedback and thoughtful review.\"}", "{\"summary\": \"This paper proposes a novel aspect to evaluate the LLM: by testing its ability to compress discrete sequence data. The proposed benchmark features several advantages such as unlimited data availability and easy to control difficulty in synthetic data generation. The proposed benchmark can be viewed as a sub-task of code-generation. 
This paper further trained a few models to outperform current models in this task to explore LLMs' edge in data compression.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The studied problem is interesting and challenging for LLMs, and not seriously studied before.\", \"The proposed benchmark is described in detail, making it easy to understand and reproduce.\", \"The efforts to train a few LLMs proved that this problem is addressable, which offers more context and helps future research.\"], \"weaknesses\": [\"Better to provide a few examples of where the LLM's ability starts to drop: for tasks easier than that threshold, LLMs are able to produce perfect compression programs (and separately for each studied and fine-tuned LLM); for tasks harder than that, LLMs fail even with fine-tuning.\", \"Better study on the edge of LLMs: if more training data are provided, if higher-rank LoRA is applied, or if better curriculum learning is applied, will it be stronger, or stop at the current level?\", \"I believe the LLM could perform better if more domain-specific presets of data are assembled, such as DNA sub-sequences with known patterns and known functionalities, yet this seems to be beyond the scope of this paper.\"], \"questions\": \"See sections above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer 1MaY\", \"comment\": \"> Better to provide a few examples of where the LLM's ability starts to drop: for tasks easier than that threshold, LLMs are able to produce perfect compression programs (and separately for each studied and fine-tuned LLM); for tasks harder than that, LLMs fail even with fine-tuning.\n\nThank you for this suggestion. We present an analysis of accuracy as a function of sequence and program lengths in \u00a7B.3, showing that longer programs and sequences are more challenging for SeqCoder. 
We will add a thorough discussion regarding accuracy for the different operations in future versions. In preliminary experiments we found that accuracy varies between operators, as has also been shown in previous works [1, 2]. This provides further evidence that better CodeLMs are needed for improved performance on KT.\n\n[1] What Makes Math Word Problems Challenging for LLMs?\n\n[2] Can We Count on LLMs? The Fixed-Effect Fallacy and Claims of GPT-4 Capabilities\n\n> Better study on the edge of LLMs: if more training data are provided, if higher-rank LoRA is applied, or if better curriculum learning is applied, will it be stronger, or stop at the current level?\n\nThank you for this question. Regarding more training data, we present a learning curve in \u00a7B.2, Tab.5. More training data is helpful with diminishing returns, as expected. Additionally, larger models perform better. We agree that curriculum learning for KT can be an exciting direction for future work, especially by modeling KT as an RL problem, and added a discussion about curriculum learning in our Limitations Section.\n\n> I believe the LLM could perform better if more domain-specific presets of data are assembled, such as DNA sub-sequences with known patterns and known functionalities, yet this seems to be beyond the scope of this paper.\n\nThank you for this interesting suggestion. We agree that assembling data from multiple domains is an exciting direction for future work, and added a discussion in our Limitations Section in the updated paper.\n\nWe thank the reviewer again for their helpful review and constructive feedback and are happy to address any additional concerns during the discussion period.\"}", "{\"title\": \"Response to reviewer y77n (2/2)\", \"comment\": \"> Are the Llama models instruction tuned or pre-trained?\n\nWe use the instruction-tuned Llama-3.1 models. We clarified this in \u00a7B.1 in the updated version. 
\\n\\n> What\\u2019s the compression rate for the training data? How many bits can it be compressed to?\\n\\nAs we discuss in the Introduction, the Kolmogorov complexity (optimal compression) for our natural sequences is uncomputable. Hence, we include strong Gzip and LMiC baselines, and an Oracle beeline for our synthetic data. We will be happy to consider any additional baselines the reviewer thinks necessary.\\n\\n> The proposed method seems to be very much model dependent, even on the size of the model? Do you add the size of the model while using your test, so that you can normalize your method across models? Otherwise larger models can very well compress all short programs effectively, thus it would then be a test of how large the LLM is.\\n\\nThe size of the model is indeed an important factor. For our LMiC baseline, the model is required during decoding time and we discard the model weights for our calculations as we discuss in Footnote 7 (results for LMiC including model weights were reported in [1]). For the CodeLMs, the weights are only required for encoding, and the compressed encoding does not necessitate the weights for decoding. Nevertheless, larger models require more compute which we do not report in our experiments (as we discuss in our Limitations section), and will be happy to add if the reviewer thinks necessary.\\n\\n[1] Language Modeling Is Compression\\n\\n> In table 1 what could be the reason for the Llama models to have a low accuracy for 8-bit prediction, while having almost double accuracy for 16-bit case?\\n\\nHigh-quality Audio-16-bit data is more challenging than lower-quality Audio-8-bit to traditional compression methods like Gzip (see Tab.4 in \\u00a7B.2). 
A possible reason for the higher Precision for the Llama models on Audio-16-bit is that the higher complexity leads to more repetitions of the input sequences, which in turn leads to higher Precision (see \u00a75.3 for a discussion about repetitions of input data).\n\n> A lot of the analysis seems to be heavily focused on Audio datasets/tasks, is there any particular reason for that? Instead of using existing code based benchmarks?\n\nWe use Audio-MFCC for several analyses because it is a challenging modality (see Tab.4 in \u00a7B.2), where the MFCC encoding represents the main features of the sound (see \u00a73.2 and \u00a7A.1 for more information regarding MFCC). For the effect of input length analysis (Tab.2) we use Audio-8-bit because it is a relatively easy modality for Gzip, hence we expect trends to appear earlier on. Our qualitative analysis in \u00a75 is composed of a uniform sample from all modalities.\n\n> Section 1 of this paper by Vitanyi (https://arxiv.org/pdf/0809.2754) has some interesting results on the Kolmogorov Complexity of different objects like very simple objects, complex objects, random objects, etc. I think some inspiration could be used from there to construct interesting examples for evaluating the LLMs, for instance showing if the existing SOTA LLMs can encode simple objects, and then progressively build on complex objects.\n\nThank you for referencing this paper, we agree it is highly relevant to our work, and that exploring varying complexities is an interesting research question. Our work currently explores different difficulties by focusing on six different modalities and various sequence lengths. We agree that curriculum learning can be an interesting suggestion for training future models and further discuss this in the Limitations Section in the updated version.\n\nWe thank the reviewer again for their helpful review and for appreciating the soundness, presentation, and contribution of our work. 
As we addressed all points raised by the reviewer in the updated version, and will include experiments with a CoT baseline in future versions, we are hopeful the reviewer will consider raising their score. We are happy to address any concerns the reviewer has during the discussion period.\"}", "{\"title\": \"Response to reviewer ohAG\", \"comment\": \"> My main concern is that the results do not generalize at all from synthetic to real datasets (0 accuracy).\n\nThank you for raising this concern. Generalization from synthetic to real distributions is a significant challenge which we discuss in our Limitations Section. We are hopeful our work can inspire future research focused on tilting the synthetic distribution towards the real one (e.g., by filtering easy-to-detect synthetic sequences), and better learning from the abundant amounts of real data (e.g., by modeling KT as an RL task).\n\n> In addition to synthetic data, the data used in this work includes natural text, audio, and human DNA sequences from the GRCh38 assembly represented in FASTA format. In protein structure prediction (PSP), sequences of proteins are commonly represented in FASTA format, and their 3D structure predicted. It would be interesting to see the compression results for protein sequences and their structures.\n\nThank you for this suggestion. In addition to our synthetic distribution, our work covers five different modalities. We will be happy to add additional modalities the reviewer thinks necessary, including protein sequences in future versions.\n\n> The synthetic DSL includes 3 initiators x 7 modifiers x 2 filters x 3 merger functions. It would be interesting to see this action space extended and understand how sensitive the results are as a function of the action space.\n\nThank you for this suggestion. We will be happy to add additional operators the reviewer thinks necessary. However, we note that this is not trivial, and will require retraining our SeqCoder models. 
Please see our response to Q1 for more details.\n\n> How sensitive are the results as a function of the action space?\n\nThank you for this interesting question. Some operators are harder for the models to learn than others. We will add a thorough discussion regarding accuracy for the different operations in future versions. In preliminary experiments, we experimented with a simplified DSL that includes only the Concatenate and Interleave operations, which was simpler for our models. To summarize, increasing the action space, especially by introducing \u201charder\u201d operators, results in sequences that are more challenging for models. We will also open-source our data generation framework to allow easy experimentation for future work.\n\nWe thank the reviewer again for their constructive feedback, and are happy the reviewer finds KT to be a \u201cuseful additional scalable benchmark for LLMs\u201d and our paper \u201cwell-written\u201d. We will also be happy to add additional modalities and operators to our experiments, and will add a discussion regarding accuracy for the different operators in future versions. Nevertheless, as we already experiment with six modalities and an extended DSL, we argue it is unlikely to affect the main findings of our work. \n\nAs we addressed all points raised by the reviewer, we are hopeful the reviewer will consider raising their score, and are happy to address any additional concerns the reviewer has during the discussion period.\"}", "{\"summary\": \"The paper proposes a method to evaluate code-based language models by using them as a compression mechanism to approximately compute the Kolmogorov Complexity (which is uncomputable). The aim of the evaluation procedure is to generate the shortest program which produces a certain data point and then stops. 
The paper proposes simple experiments to show how to approximately compute the Kolmogorov complexity of data points from LLMs. Experiments in the paper use state-of-the-art closed and open-source language models, and show that even the SOTA models struggle on the Kolmogorov test proposed in the paper.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed test is an interesting information-theoretic test which can be used to compute the information in data samples relative to a model. If samples have smaller programs, then we know that the LLM has understood the structure of that problem/sequence.\", \"Table 1 is an interesting figure as it shows that even SoTA models currently struggle on at least some benchmarks like the Kolmogorov-Test proposed in the paper.\", \"The experiments in the paper are intuitive, and seem to be easy to follow and replicate.\", \"The test proposed in the paper is different from the usual benchmarks and can be added to any existing code evaluation pipeline by simply computing the size of the program which generates the desired output from the test cases.\"], \"weaknesses\": [\"The connection between Kolmogorov Complexity and Intelligence is not clear in the submission. If we take the popular code-based benchmarks and compute the size of the programs, let's say at pass@10, then can we say that models which produce shorter programs are more intelligent?\", \"When using LLMs on the application side, users are often interested in easy-to-use/understand programs; will producing shorter programs be useful in this case?\", \"For the synthetic programs that are constructed as part of the dataset, is there a short proof for why they are the shortest length? 
Are you using some of the properties of the operations to assure that?\", \"Here is an example where the training set may not contain the shortest program, for instance consider the following program:\", \"seq_1 = range(0,10,2)\", \"seq_2 = range(1,11,2)\", \"seq_3 = interleave(seq_1,seq_2)\", \"output=seq_3\", \"In this case the shortest program is range(0,10,1) and not the program in the training set.\", \"Generating the shortest program for any given sequence is NP-Complete, so what does line 246 correspond to? Are we trusting the LLM to generate the shortest programs? The baseline seems to be unreliable.\", \"In terms of the paper writing, it would certainly help if you define the Kolmogorov test or present it as an algorithm. While it\u2019s understandable for people with background, it could be hard to understand for a general audience.\", \"For the LLMs in Table 1, especially the Llama models, why were few-shot prompting / chain-of-thought reasoning not considered as baselines? For the synthetic tasks, I believe using CoT with few-shot examples could certainly improve performance. It also seems that the SeqCoder-1.5B model is overfitting to the synthetic task instead of actually being intelligent. Would the claim here be that the 1.5B model is more intelligent than GPT-4o since it has a higher accuracy?\"], \"questions\": [\"Are the Llama models instruction tuned or pre-trained?\", \"What\u2019s the compression rate for the training data? How many bits can it be compressed to?\", \"The proposed method seems to be very much model dependent, even on the size of the model? Do you add the size of the model while using your test, so that you can normalize your method across models? 
Otherwise larger models can very well compress all short programs effectively; thus it would then be a test of how large the LLM is.\", \"In Table 1, what could be the reason for the Llama models to have a low accuracy for 8-bit prediction, while having almost double the accuracy for the 16-bit case?\", \"A lot of the analysis seems to be heavily focused on Audio datasets/tasks; is there any particular reason for that, instead of using existing code-based benchmarks?\", \"Section 1 of this paper by Vitanyi (https://arxiv.org/pdf/0809.2754) has some interesting results on the Kolmogorov Complexity of different objects like very simple objects, complex objects, random objects, etc. I think some inspiration could be used from there to construct interesting examples for evaluating the LLMs, for instance showing if the existing SOTA LLMs can encode simple objects, and then progressively build on complex objects.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer xD3h (2/2)\", \"comment\": \"> The authors didn\u2019t report the effect of execution and feedback on the SOTA LLMs. From Table 3, the authors have just presented the ablation on the trained synthetic model.\n\nWe experiment with execution feedback only for our SeqCoder models because (a) the models are trained to generate *compositional* programs where each line is executable on its own, and (b) we can easily *train* the models with execution feedback. We agree that experimenting with execution feedback with SOTA CodeLMs is an exciting direction; however, it is not trivial to implement reliably and has recently been shown to be challenging without additional training [1, 2]. 
Hence, we believe that methods that provide additional supervision for SOTA CodeLMs are an exciting direction for future research.\n\n[1] What Makes Large Language Models Reason in (Multi-Turn) Code Generation?\n\n[2] NExT: Teaching Large Language Models to Reason about Code Execution\n\n\n> The authors didn\u2019t use models like openai-o1 which can reason before emitting any code and have been shown to have good performance on code-based tasks. It would be interesting to see how these inference-based reasoning models perform on this task.\n\nThank you for this suggestion. OpenAI-o1 was released on 12/9/24, less than a month before the submission deadline on 1/10/24. In addition, it is significantly more expensive than GPT4-o. Nevertheless, we agree with the reviewer that experimenting with o1 can be an interesting addition, and will be happy to add these experiments in future versions of the paper.\n\n> The authors haven\u2019t reported the accuracy and precision of the Seqcoder 8B 1M model on synthetic data. Does starting from a better base model help in accuracy?\n\nThank you for this question. We added results with SeqCoder-8B to Tab.1. Starting from a stronger, better trained model increases accuracy over our synthetic sequences, but does not lead to better generalization to real sequences.\n\nWe thank the reviewer again for their helpful and constructive feedback. We will add experiments with CoT and OpenAI-o1 to future versions of the paper. As we clarify above, and as was also mentioned by the reviewer, KT is a challenging benchmark that can both inspire new research and effectively evaluate CodeLMs. 
As we addressed all comments raised by the reviewer, we are hopeful the reviewer will consider raising their score, and are happy to address any additional concerns the reviewer has during the discussion period.\"}", "{\"summary\": \"The authors introduce the KOLMOGOROV-TEST (KT), a novel \\\"compression-as-intelligence\\\" test for code generation in large language models (LLMs). Using KT we can prompt or train LLMs to generate the shortest possible programs that reproduce given data sequences, emulating an ideal form of data compression known as Kolmogorov compression, which is theoretically optimal but uncomputable a priori. The contribution has several advantages: it provides a virtually limitless supply of problem instances of varying difficulty, uses a robust metric (compression rate), and minimizes risks of pretraining data contamination. In evaluating current models like GPT-4o Omni and LLAMA-3.1-405B across multi-modal data types (audio, text, DNA, and synthetic sequences), the authors find that these models perform poorly on both natural and synthetic data. The authors also showed that while training on synthetic data shows improved compression rates, the gains do not transfer well to real-world data for KT.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Originality**:\nThe authors present an innovative approach by introducing a \\\"compression-as-intelligence\\\" test (KOLMOGOROV-TEST, or KT) that challenges LLMs to generate the shortest programs across multiple data modalities, a novel and ambitious application of LLMs in program synthesis. The use of a multi-modal framework (including audio, text, DNA, and synthetic data) is particularly original, expanding the scope of LLMs in code generation.\n\n**Quality**: \nThe authors collected program-sequence pairs for supervised training and evaluation, developing a compositional domain-specific language (DSL) and an automatic data generation framework. 
This framework enables consistent and reliable data generation while avoiding potential biases through uniform priors, which strengthens the robustness of the study. Additionally, the authors investigated the impact of execution feedback on performance, further enhancing the study's depth.\\n\\n**Clarity**: \\nThe paper includes a detailed analysis section, providing insights into model performance and limitations. The authors clearly document failure cases, particularly highlighting instances where models fail by attempting to reverse-engineer the data generation process rather than compress it effectively. This transparency improves the readability and interpretability of the results.\\n\\n**Significance**: \\nKT sets a new benchmark for evaluating LLMs on program synthesis via compression tasks, offering a reliable metric (compression rate) that is both challenging and robust. The framework holds potential for broader applications and improvements in LLM-based code generation by identifying key areas for enhancement, such as in handling complex, real-world data compression.\", \"weaknesses\": \"**Limited Focus on Code-Specific LLMs**:\\nAlthough the paper introduces a compression benchmark specifically for code-generating language models (CODE LMs), the experiments primarily use general-purpose LLMs. Testing specialized CODE LMs could offer a more accurate assessment of model performance for program synthesis tasks, as these models are often fine-tuned with code-specific vocabularies and architectures. Incorporating CODE LMs in future work would enhance the relevance and impact of the findings.\\n\\n**Understudied Tokenization Impact**: \\nThe impact of tokenization on model performance is underexplored. Prior work (e.g., Del\\u00e9tang et al.) suggests that larger vocabularies may offer more expressive flexibility but can also complicate next-token prediction. 
Evaluating different tokenization strategies would shed light on their influence on compression rates and model accuracy, potentially guiding optimizations in vocabulary selection to balance expressiveness and predictive ease.\\n\\n**Suggestions to test Length Generalization**: \\nScaling to longer sequences remains an open question. Applying techniques like Rotary Position Embeddings (ROPE) for scaling might provide insights into the model's capacity for length generalization. Additional experiments with longer sequences could clarify how well the model generalizes across variable input sizes, which is critical for practical applications of program synthesis.\\n\\n**Limited Prompt Optimization**: \\nThe paper could benefit from more work on prompt optimization, as prompt quality can significantly impact model output. Experimenting with prompt tuning strategies or context-enhancing techniques could improve model performance and compression efficiency. This would help achieve the paper\\u2019s goal of generating shorter, more precise programs, as optimized prompts can guide the model more effectively toward compact solutions.\", \"questions\": \"1. **Live Benchmark Availability**:\\n Are there any plans to release KT as a live benchmark in addition to the dataset? A live benchmark could provide ongoing insights into model performance and facilitate community-driven improvements over time.\\n\\n2. **Use of Code-Specific LLMs**: \\n Why weren\\u2019t code-specific LLMs, such as CodeLLAMA, DeepSeek Coder, or StarCoder, tested in this work? These models have vocabularies optimized for code, which may affect compression rates and generalization performance. Testing code-specific LLMs could provide better priors for transfer learning and potentially enhance performance on program generation tasks.\\n\\n3. **Training Details**: \\n How many epochs or steps were the models trained on for this task? 
Understanding the training duration would clarify whether models had sufficient exposure to the data to learn effective compression strategies.\\n\\n4. **Sequence Length and Token Count**: \\n The input sequence lengths vary from 16 to 1024 bytes, but could you clarify the number of tokens these represent? This information would help assess how sequence length impacts model performance and how the model tokenizes different sequence lengths.\\n\\n5. **Data Type Configuration for LLAMA-3.1-405B**: \\n Which data type was used for LLAMA-3.1-405B (e.g., bf16, fp16, fp8)? Were any specific data types recommended by the model\\u2019s authors tested to ensure compatibility with the architecture? Correct data type configurations can be critical for performance, and confirming this would add clarity to the experimental setup.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"This paper proposes a benchmark of testing LLM's \\\"compression-as-intelligence\\\" ability in code generation. The simple idea is to require models to generate the shortest possible python program that reproduces an input data sequence, aligning with the concept of Kolmogorov complexity. Experiments in the paper use both state-of-the-art closed (GPT-4-o) and open-source (Llama-3.1-405b) language models, using data from text, audio, and DNA, as well as synthetic data with specific patterns, to show that even the SOTA models struggle with the Kolmogorov test. The authors also develop a Domain Specific Language (DSL) to create program-sequence pairs for supervised training, leading to trained models that achieve lower compression rates than traditional methods on synthetic data but perform poorly on real data. 
In the end, the proposed test and experiments shed light on the need for further innovation to improve general LLM abilities.\\n\\nMost reviewers (6 out of 7) rated the work positively. Weighted by the discussion quality, I tend to side with the collective opinion of all the reviewers and propose to accept the work.\", \"additional_comments_on_reviewer_discussion\": \"This paper had an unusual number of 7 reviewers due to the lateness of some assigned reviewers and an overcompensation of added emergency reviewers. I first want to acknowledge the outstanding job the authors did in addressing this volume of reviews and maintaining discussions.\\n\\nDuring the discussion period, the authors were able to convince 3 reviewers to raise their scores, and had 2 others respond positively to their changes.\\n\\n2 reviewers who rated marginally (5 and 6) never responded during the discussion period, despite multiple pings from the AC.\"}
We will be happy to experiment with additional models the reviewer thinks necessary, including the recently released OpenAI-o1 as suggested by reviewer xD3h.\\n\\n[1] Code Llama: Open Foundation Models for Code\\n\\n[2] The Llama 3 Herd of Models\\n\\n> Understudied Tokenization Impact\\n\\nThank you for this suggestion. We agree that tokenization is an important factor that can further improve performance. However, as we use strong CodeLMs that have been shown to perform well on coding tasks, we argue this is out of scope for our current work. Based on the reviewer\\u2019s comment, we added a discussion about tokenization in our Limitations section.\\n\\n> Suggestions to test Length Generalization\\n\\nThank you for this interesting suggestion, and please see our response to reviewer 5eaV regarding long-context optimizations. Mainly, we argue that the main bottleneck for additional progress on KT is reasoning rather than long-context abilities. Nevertheless, we agree that stronger long-context CodeLMs can perform better and believe it can be an interesting area for future work.\\n\\n> Limited Prompt Optimization\\n\\nThank you for this suggestion. We will add a CoT baseline for future versions. Please see our response to all reviewers about a CoT baseline for more details.\\n\\n> Live Benchmark Availability\\n\\nThank you for verifying, we plan to have a live leaderboard. We clarified this in the Reproducibility section in the updated paper.\\n\\n> Use of Code-Specific LLMs\\n\\nLlama-3.1-405B and GPT4-o have high performance on coding benchmarks [1] and perform better or on par with the models mentioned above. We will be happy to consider any models the reviewer thinks necessary including OpenAI-o1 as suggested by reviewer xD3h.\\n\\n[1] The Llama 3 Herd of Models\\n\\n> Training Details\\n\\nThe SeqCoder-1.5B and SeqCoder-8B models were trained for 20K and 10K steps respectively, as presented in Tab.5 in \\u00a7B.1. 
We clarified this in \\u00a7B.2 in the updated version.\\n\\n> Sequence Length and Token Count\\n\\nBecause our sequences are composed of numbers in the range [0, 255], each element is a single token for the Llama models. Additionally, we use a comma as a separator between elements, so a sequence of length 16 is composed of 31 tokens (16 elements and 15 separators). For the LMiC baseline, we do not consider the cost of generating the separator. We further clarified this in \\u00a7B.2, Footnote.14 in the updated version.\\n\\n> Data Type Configuration for LLAMA-3.1-405B\\n\\nWe use the default official BF16 configuration from vLLM and will release code to reproduce our experiments.\\n\\n\\nWe thank the reviewer again for their helpful review and constructive suggestions and will be happy to address any concerns the reviewer has during the discussion period.\"}
We clarified this point in the Limitations Section in the updated version of the paper.\\n\\n[1] The Hutter Prize\\n\\n[2] Language Modeling Is Compression\\n\\n[3] Compression Represents Intelligence Linearly\\n\\n[4] A Theory of Universal Artificial Intelligence based on Algorithmic Complexity\\n\\n[5] Machine Super Intelligence\\n\\n> When using LLMs on the application side, users are often interested in easy to use/understand programs, will producing shorter programs be useful in this case?\\n\\nThere is no theoretical guarantee that shorter programs (programs with lower Kolmogorov Complexity) are easier for humans to understand. While readability of programs is an important aspect of Code Generation, we argue it is out of scope for our work where the main emphasis is on data compression.\\n\\nWe believe that the ability of CodeLMs to generate programs in low-level languages that compress data in ways that are challenging for humans to come up with or even understand can be an advantage rather than a limitation of future AI systems, as this could potentially pave the path to better compressors. We clarified this point in the Limitations Section in the updated version of the paper. We further note, that if one uses a prior over programs that is trained on human code, then compressibility under the prior should result in code that \\u201clooks like\\u201d human-written code.\\n\\n> For the synthetic programs that are constructed as part of the dataset, is there a short proof for why they are the shortest length? 
Are you using some of the properties of the operations to assure that?\\n\\nAlthough formal guarantees on being the shortest program are impossible to obtain, even in theory, we made a significant effort to ensure that the target program in our program-sequence data is at least close to optimal by:\\n- Removing redundant operations - for example, we do not have a range_down operator to remove redundancy between *reverse(range(x))* and *range_down(x)*.\\n- Simplicity bias - when programs generate the same sequence, we only keep the shortest program.\\n\\nWe further discussed this point in \\u00a7A.2 in the updated version of the paper. While we do not guarantee that our programs are indeed the shortest and do not have an efficient method in mind that would enforce it, we argue that the combination of our efforts and the empirical results are sufficient for the scope of this work.\\n\\n> Generating the shortest program for any given sequence is NP-Complete so what does line 246 correspond to? Are we trusting the LLM to generate the shortest programs?\\n\\nLine 246 refers to the description of our prompted baselines, which are prompted to generate the shortest program for a given sequence. As we show in our experiments and analysis this is often not the case. We updated the phrasing of this line in the new version.\\n\\n> In terms of the paper writing, it would certainly help if you define the Kolmogorov test or present it as an algorithm.\\n\\nWe thank the reviewer for this suggestion and added a formal definition of KT in Alg.2 in \\u00a7A.3 in the updated paper.\\n\\n> For the LMs in Table 1 especially the Llama models, why was few shot prompt / chain of thoughts reasoning not considered as good baselines? For the synthetic tasks, I believe using CoT with few-shot examples could certainly improve performance. 
\\n\\nWe thank the reviewer for this suggestion and will add a CoT baseline (we refer the reviewer to our Response to all Reviewers regarding CoT and our response to reviewer xD3h regarding few-shot baselines).\\n\\n> It also seems that the SeqCoder-1.5B model is overfitting to the synthetic task instead of actually being intelligent. Would the claim here be that the 1.5B model is more intelligent than GPT-4o since it has a higher accuracy?\\n\\nWe do not claim that SeqCoder-1.5B is more intelligent than GPT4-o, because (a) it saw more relevant training examples, and (b) it does not perform better on natural data. We further clarified this in the updated version.\"}", "{\"title\": \"Response to authors\", \"comment\": \"I think the paper works on an important direction, and I am satisfied with the response. I have updated the score.\"}", "{\"title\": \"Response to all reviewers\", \"comment\": \"We thank the reviewers for their thorough reviews and constructive feedback. We are also thankful for their overall positive assessment of our paper.\\n\\n\\n>Response to all reviewers regarding a CoT baseline\\n\\n\\nReviewers 5eaV, xD3h, CMwz noted that they would like to see a chain-of-thought (CoT) baseline. We will be happy to include this baseline in the final paper. We also wanted to add that although CoT is a common method to improve performance, recent work found it can be challenging to apply it reliably for code generation tasks and it can also decrease performance [1,2]. Hence, we do not expect these experiments to affect the main findings of our work, and argue that the Kolmogorov-Test (KT) can be a helpful resource for future CodeLMs, as seemed to be agreed upon by the reviewers. Nevertheless, we will be happy to experiment with a CoT baseline in future versions. \\n\\n[1] To CoT or not to CoT? 
Chain-of-thought helps mainly on math and symbolic reasoning\\n\\n[2] What Makes Large Language Models Reason in (Multi-Turn) Code Generation?\\n\\n\\n> Updated version of the paper\\n\\nWe uploaded a new version of the paper. The main changes include:\\n- Addressing comments and suggestions by all reviewers.\\n- SeqCoder-8B results in Tab.1 based on comments by reviewers xD3h and 5eaV.\\n- Formal definition of KT in Alg.2 in the appendix, based on comment by reviewer y77n.\\n- We will also be happy to include experiments with OpenAI-o1 (based on suggestion by reviewer xD3h) and analysis for different operators (based on suggestions by reviewers ohAG and 1MaY) in future versions, although we are unsure we will have these and our experiments with the CoT baseline in time for the discussion period.\\n\\n\\nWe respond in detail to all comments in our individual response to the reviewers and will be happy to address any concerns the reviewers have during the discussion period.\"}", "{\"title\": \"Response to reviewer xD3h (1/2)\", \"comment\": \"> Although the benchmark is effectively designed around compression as a code generation problem, the prompts used in the zero shot evaluation of the models are quite open-ended compared to the structured and closed nature of the DSL design that has been used in the synthetic experiments.\\n\\nThank you for this suggestion. We will add experiments with a CoT baseline to future versions of the paper (please see our response to all reviewers regarding CoT baseline). \\n\\nWe will also be happy to add an in-context learning (ICL) baseline. However, we note that this may not be trivial to implement reliably, as it raises additional questions, mainly (a) which examples to include in the prompt in a way that will not bias the model towards specific operators, and (b) should we prompt the models to use our DSL (like the trained models) or Python programs (like our zero-shot prompted baselines). 
While we agree that adding examples for SOTA models is likely to improve accuracy for our synthetic sequences and will be an additional contribution, it is unclear it will be helpful for real sequences without annotating domain-specific examples, which can be challenging. We will be happy to explore this in future versions.\\n\\nWe are happy to receive additional suggestions from the reviewer and will be sure to include these experiments in future versions. One of the reasons we are excited about KT is that solving it will likely require many advanced techniques, including the ones you mention, thus driving progress for years to come.\\n\\n> While the assessment using real-world data is sensible, expecting the smaller 1.5B/8B model trained on synthetic data to generalize effectively to real-world data is quite ambitious.\\n\\nWe agree that generalization from synthetic to real distributions is a major challenge, which we discuss in our Limitations Section. We are hopeful our results can inspire future research in developing methods that tilt the synthetic distribution towards the real one, e.g., by filtering easy to detect synthetic examples or use of RL techniques. While we agree that lower model capacity can limit generalization, our results with two model sizes (1.5B and 8B, see updated Tab.1 in the new version) do not show better generalization for the larger model. Hence, we hypothesize that novel data generation methods and modeling KT as an RL task are exciting directions for future work. \\n\\n> Beyond the evaluation and training of models, it is highly unlikely that LLMs can be traditionally and reliably used for compression instead of deterministic methods like gzip. 
If the focus is on real data, realizing real-world data compression as a code generation problem that outperforms gzip is very challenging without better base models, improved prompt design, or better synthetic data design.\\n\\nThe long-term focus of KT is on real data and we clarified this point in our Conclusion Section in the updated version, thanks to the reviewer\\u2019s comment. The motivation for the KT benchmark is primarily to provide a very hard challenge for code generation research, requiring code understanding, reasoning and pattern recognition capabilities beyond current models. Secondarily, with sufficient progress and scale, progress on KT may eventually result in competitive compression methods. The probabilistic nature of LLMs is not an impediment, since one can draw multiple samples (or otherwise use test-time compute) to find short and correct programs at encoding time, it is possible to automatically encode prediction errors (so that only code length and not correctness becomes stochastic), and decoding is deterministic.\\n\\nWe would also like to ask the reviewer to defer judgment on whether it is possible to achieve competitive compression results; in the last decade or so, many impossible-seeming tasks (classifying imagenet with a neural network, building chatbots by scaling language models, etc.) had seemed intractable until they were solved. The space of all programs includes any other compression method and so it is not a question of \\u201cif\\u201d these methods can outperform, but \\u201chow\\u201d we can reach that goal. Our aim is to focus research effort in this direction using the KT benchmark.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe are glad to hear that you find the paper addresses an important direction and are satisfied with our response. 
Thank you again for your valuable feedback and thoughtful review.\"}", "{\"title\": \"Response to reviewer 5eaV\", \"comment\": \"> Though the breadth of the experiments is extensive, the Paper could have included experiments and analysis of popular prompting techniques that have been shown to increase the reasoning and coding capabilities of LLMs, such as Chain-of-Thought, Tree-of-Thoughts etc.\\n\\nThank you for this suggestion. We will add a CoT baseline to future versions (please see our response to all reviewers regarding a CoT baseline).\\n\\n\\n> As the paper's main idea is influenced by the Hutter Prize [1], it would be useful to use any of the compressors in the leaderboard [2] as an additional baseline.\\n\\nThank you for this suggestion. Because we use the same 1GB of Wikipedia data as the Hutter prize we can directly use submissions from their leaderboard as additional baselines for the text modality. We added a clarification regarding state-of-the-art results on the Hutter prize in the updated version of our work in \\u00a7B.2. We plan to create a leaderboard to KT, and will make sure to provide a reference from our leaderboard to the Hutter Prize. We will also be happy to experiment with tensorflow-compress on the other modalities if the reviewer thinks necessary.\\n\\n> Does any of the recent works on lifting the length constraint of LLMs help in Length generalization for SeqCoder? e.g. \\\"Efficient Streaming Language Models with Attention Sinks\\\".\\n\\n\\nThank you for this question. Our results in \\u00a75.2 show that models struggle on longer sequences, and accuracy is near zero for sequence lengths of 128. Because we experiment with Llama-3.1 models [1] that have been shown to perform well on long-context tasks (for completeness we added results with SeqCoder-8B, which is based on Llama-3.1-8B to Tab.1), we hypothesize that the main bottleneck for better performance on KT is due to reasoning, rather than long-context skills. 
However, in our analysis in \\u00a75.3 we see that current models sometimes fail to even repeat input sequences, suggesting that better attention mechanisms are also needed for good performance.\\n\\n[1] The Llama 3 Herd of Models\\n\\n\\nWe thank the reviewer again for their helpful feedback and are happy to address any additional concerns during the discussion period.\"}", "{\"summary\": \"This work presents a Kolmogorov test for code generation models.\\n\\nExperiments are performed on synthetic data, natural text from Wikipedia, audio sequences from LibriSpeech, and human DNA sequences from the GRCh38 assembly.\\n\\nStandard compression is compared with LLMs and code generation. Specifically, Gzip and LMiC are used as baselines, and compared with baselines that prompt open-weights (Llama-3.1) and closed-weights (GPT-4o) LLMs to generate the shortest Python program that produces the input, and also open-weight models trained on synthetic text and code.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The result of the open-weight models trained on synthetic text and code is impressive (84.5% accuracy),\\nand their compression rate outperforms existing baselines.\\n\\n2. Compression by code generation may serve as a useful additional scalable benchmark for LLMs.\\n\\n3. The paper is well-written, and the exposition is clear.\", \"weaknesses\": \"1. My main concern is that the results do not generalize at all from synthetic to real datasets (0 accuracy).\\n\\n2. In addition to synthetic data, the data used in this work includes natural text, audio, and human DNA sequences from the GRCh38 assembly represented in FASTA format. In protein structure prediction (PSP), sequences of proteins are commonly represented in FASTA format, and their 3D structure predicted. 
It would be interesting to see the compression results for protein sequences and their structures.\\nMore broadly, it would be interesting to extend the real-world datasets to other data types.\\n\\n3. The synthetic DSL includes 3 initiators x 7 modifiers x 2 filters x 3 merger functions. It would be interesting to see this action space extended and to understand how sensitive the results are as a function of the action space.\", \"questions\": \"How sensitive are the results as a function of the action space?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
We will be happy to experiment with additional models the reviewer thinks necessary, including the recently released OpenAI-o1 as suggested by reviewer xD3h.\\n\\n[1] Code Llama: Open Foundation Models for Code\\n\\n[2] The Llama 3 Herd of Models\\n\\n> The observations in this work when fine-tuning small LMs on synthetic instruction sets for compression tasks - appear to echo other works from the math-reasoning domain, where extensive fine-tuning allows a relatively smaller model to outperform a larger base model (on tasks such as GSM8k), but without any generalized improvements.\\n\\nThank you for this suggestion, we clarified this point and added references in the Related Work section in the updated paper, and will be happy to reference additional works the reviewer thinks necessary.\\n\\n\\nWe thank the reviewer again for their helpful review and constructive feedback and will be happy to address any concerns the reviewer has during the discussion period.\"}", "{\"title\": \"Message to reviewers before end of discussion period\", \"comment\": \"Dear reviewers,\\n\\nWe wanted to thank you again for your reviews, which were very helpful in improving our work. If possible, we would be very happy to know whether all your concerns have been addressed and to answer any follow-up questions during the time remaining for the discussion period.\"}", "{\"comment\": \"Dear reviewer,\\n\\n\\nWe are glad to hear our clarifications were helpful and thank you again for your helpful feedback and thoughtful review.\"}", "{\"summary\": \"This paper presents two contributions to the field of language model (LM) evaluation and training:\\n\\n* KoLMogorov Test: A proposed test for language models grounded in compression principles with several benefits. Specifically, there is a preponderance of real-world examples (e.g. 
audio, text, DNA, or any other modality whose data can be represented as a bit sequence) that can be used to curate an evaluation set of desired degrees of hardness and that cannot otherwise be used for fine-tuning LMs - preventing benchmark hacking. Also, this method allows for computationally cheap and reliable automated evaluation that can measure both generation accuracy and quality, which is traditionally a bottleneck in language model evaluation setups.\\n* Synthetic instruction set for fine-tuning LMs for the compression task: This work also turns this test into an LM post-training task and provides a framework for generating a synthetic instruction set to enhance LMs' compression capabilities.\\n\\nAdditionally, this work provides interesting insights into LM fine-tuning:\\n1. Small LMs extensively fine-tuned on a synthetic instruction set outperform state-of-the-art prompted models and classical baselines (the gzip algorithm).\\n2. Such performance gains by small fine-tuned models on synthetic data do not translate to performance gains on real-world data.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The authors have considered datasets of different modalities and degrees of hardness and analyzed state-of-the-art models from both open and closed-source spaces, making a convincing case for KT as a test for code-generation models. 
This test has the additional benefits of the corresponding evaluation sets not being directly usable for fine-tuning and the preponderance of hard examples in natural data sources.\\n\\nThe various ablations presented in this work comprehensively capture the limited performance of these SOTA LMs on the task of compression, which will be helpful to the broader community.\\n\\nThe framework for synthetic data generation proposed in this work could be used to augment instruction sets towards large-scale post-trainings of LMs aimed at generalized gains.\", \"weaknesses\": \"Two points that are fundamental to this work -\\n1. the efficacy of code-generation language models at compression tasks and \\n2. some of the observations around fine-tuning LM on compression tasks \\n\\nhave already been discussed in prior work, as the authors of this work aptly reference. This limits the novelty of the present work.\", \"questions\": \"Since compression is posited as a test of [coding] LM intelligence, it will be interesting to examine the other dimensions of an LM's \\\"intelligence\\\" that correlate with its compression efficacy. A potential experimental set-up to this end could involve fixing a base model and measuring the performance of its variants which have been fine-tuned on disparate tasks such as math reasoning, code, etc., on the KoLMogorov Test. If the KoLMogorov Test exhibits a good correlation with the other intelligence dimensions, its benefits may allow it to subsume other benchmarks. KT may also lend itself as a reward function during LM preference optimizations.\\n\\nThe observations in this work when fine-tuning small LMs on synthetic instruction sets for compression tasks - appear to echo other works from the math-reasoning domain, where extensive fine-tuning allows a relatively smaller model to outperform a larger base model (on tasks such as GSM8k), but without any generalized improvements. 
If the authors see any fine-grained parallels here - this discussion would be a useful contribution to the community.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper talks about compression of data using LLMs via code generation. The smallest possible way to compress a sequence of data called the Kolmogorov compression is uncomputable. Therefore, the paper tests the capability of code generating LLMs to generate a program that can produce the sequence, thus compressing it. They argue that achieving this requires the models to reason, plan, and search for complex patterns within the sequence to enhance compression performance. The paper evaluates state-of-the-art models, such as Llama-3.1-405b and GPT-4-o, in a zero-shot manner on real-world compression data sources like audio, text, and DNA, demonstrating that these models perform poorly compared to the deterministic gzip baseline.\\u00a0Additionally, the authors assess the models on a synthetic benchmark of (sequence, program) pairs created using a custom domain-specific language (DSL). On this synthetic distribution, they trained a code generation model that outperformed gzip with a uniform prior over the custom DSL. However, they also showed that these models don\\u2019t generalize to real data. The paper highlights significant differences between the distributions of real and synthetic sequences, with real sequences exhibiting more repetitions. Through ablation studies, the authors further demonstrate that the trained models do not generalize well to longer sequences.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The main strengths of the paper include :\", \"The paper studies the problem of compression as a code generation problem with current code language models. 
This is quite novel in the context of language models, as previous works have primarily used language models for compression through arithmetic coding. This new framing of the task requires the model to understand patterns in the sequence and reason about how to segment the sequence into different subsequences.\", \"The paper proposes a dynamic benchmark comprising (sequence, program) pairs which test a model\\u2019s capability at compressing sequences generated using a custom domain-specific language (DSL) comprising different functions that can be realized as Python functions. The paper shows that codeLMs trained on this synthetic benchmark can outperform deterministic compression methods like gzip when efficient compression priors are applied to generated functions. Notably, zero-shot prompted state-of-the-art models like GPT-4-o and Llama-3.1-405b fail to reproduce the input sequence more than 66% and 45% of the time, respectively. The dynamic nature of the benchmark, due to its dependency on the customizable DSL, prevents existing LLMs from gaming or memorizing it, as it can generate different examples of varying complexity that the model has not previously encountered during training.\", \"The paper\\u2019s evaluation of code-generation-based compression on various real-world data sources like audio, text, and DNA is also noteworthy. In the literature of information theory and communication, these domains have been thoroughly studied for different compression-based algorithms. 
Although none of the models discussed in the paper outperformed the gzip baseline for compression, this finding highlights an interesting phenomenon that is widely discussed in the literature: the numerical reasoning and counting limitations of these LLMs.\"], \"weaknesses\": [\"The major weaknesses of the paper include :\", \"Although the benchmark is effectively designed around compression as a code generation problem, the prompts used in the zero shot evaluation of the models are quite open-ended compared to the structured and closed nature of the DSL design that has been used in the synthetic experiments. This leads to an unfair comparison with the synthetic model.\\u00a0 With better prompt-designs, with better in-context examples, inclusion of DSL definitions in the prompt, ReACT prompts with verification in the loop, these SOTA models can achieve good performance. This is particularly relevant given that feedback and execution improved the accuracy of the trained model on audio-based data, and the majority of errors in SOTA models were execution-based.\", \"While the assessment using real-world data is sensible, expecting the smaller 1.5B/8B model trained on synthetic data to generalize effectively to real-world data is quite ambitious. The distribution of real data significantly differs from synthetic data; the former has more repetitions, while the latter exhibits more complexity in terms of interleaving sequences. The synthetic data used by the authors consists of smaller sequences and smaller length programs whereas real-world data with more repetitions would result in longer generated programs - something the model wasn't trained on, leading to a significant drop in accuracy. 
This decrease in accuracy for the trained synthetic model could also be attributed to model capacity, as similar models like LLaMa-3.1-8B also show lower accuracy on real-world data.\", \"Beyond the evaluation and training of models, it is highly unlikely that LLMs can be traditionally and reliably used for compression instead of deterministic methods like gzip. LLMs are inherently probabilistic models and cannot generate lossless compressions without hallucinations or mistakes in a zero-shot setup. The motivation behind designing such a complex benchmark around the Kolmogorov test is unclear. If the focus is on synthetically generated data/benchmark, small 1.5B models trained on 10k-1M perform well on them with some supervised training. However, if the focus is on real data, realizing real-world data compression as a code generation problem that outperforms gzip is very challenging without better base models, improved prompt design, or better synthetic data design.\"], \"questions\": [\"For the experiments, the following should have been additionally explored by the authors :\", \"The authors didn\\u2019t report the effect of execution and feedback on the SOTA LLMs. From Table 3, the authors have just presented the ablation on the trained synthetic model.\", \"The authors didn\\u2019t use models like openai-o1 which can reason before emitting any code and have shown to have good performance on code based tasks. It would be interesting how these inference based reasoning models perform on this task ?\", \"The authors haven\\u2019t reported the accuracy and precision of the Seqcoder 8B 1M model on synthetic data. 
Does starting from a better base model help in accuracy ?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to authors\", \"comment\": \"I have read the authors\\u2019 response and have chosen to keep my current score.\"}", "{\"summary\": \"The paper introduces a benchmark designed to evaluate LLM's capabilities in compression as an indicator of intelligence. The Benchmark requires models to generate the shortest possible python program that reproduces an input data sequence, aligning with the concept of Kolmogorov complexity. The authors assess current language models' performance in KT using data from text, audio, and DNA, as well as synthetic data with specific patterns. Results show that models like GPT4-O and LLAMA-3.1-405B struggle with both natural and synthetic data. The authors also develop a Domain Specific Language (DSL) to create program-sequence pairs for supervised training, leading to trained models that achieve lower compression rates than traditional methods on synthetic data but perform poorly on real data. Authors have conducted further experiments and comparisons with different baselines including GZIP, \\\"Language Model is Compression\\\", etc. The paper suggests that further innovations are necessary for models to generalize well on real-world data within the KT framework.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Extensive amounts of experiments conducted and insights gathered from it are the major strengths of this paper. Experiments related to trained SeqCoder are particularly helpful in highlighting the generalization issues and how much it could be overcome with synthetic data.\\n2. Clear articulation of the DSL Language and of the techniques used in generating the data. The DSL Language and its complexity seems sufficient for benchmarking SOTA LLMs.\\n3. 
Easy to read, with key pieces of information and additional analysis generally emphasized appropriately.\\n4. Setting aside the question of whether LLMs will be widely used to generate code to compress data, this benchmark highlights current limitations of SOTA LLMs, even with basic patterns such as repetition.\", \"weaknesses\": \"1. Though the breadth of the experiments is extensive, the paper could have included experiments and analysis of popular prompting techniques that have been shown to increase the reasoning and coding capabilities of LLMs, such as Chain-of-Thought, Tree-of-Thoughts etc. Given the popularity of these techniques and how prevalent CoT is in all major reasoning benchmarks, including at least one prompting technique would have been useful in the analysis.\\n\\n2. As the paper's main idea is influenced by the Hutter Prize [1], it would be useful to use any of the compressors in the leaderboard [2] as an additional baseline. Specifically, compressors that use LSTM (such as tensorflow-compress) would make for an interesting baseline for the benchmark.\\n\\n[1] - http://prize.hutter1.net\\n\\n[2] - http://mattmahoney.net/dc/text.html\", \"questions\": \"See weaknesses for questions\\n\\nAlso, do any of the recent works on lifting the length constraint of LLMs help in length generalization for SeqCoder? e.g. \\\"Efficient Streaming Language Models with Attention Sinks\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
C3TrHWanh5
Hessian-Free Online Certified Unlearning
[ "Xinbao Qiao", "Meng Zhang", "Ming Tang", "Ermin Wei" ]
Machine unlearning strives to uphold the data owners' right to be forgotten by enabling models to selectively forget specific data. Recent advances suggest pre-computing and storing statistics extracted from second-order information and implementing unlearning through Newton-style updates. However, the Hessian matrix operations are extremely costly and previous works conduct unlearning for empirical risk minimizer with the convexity assumption, precluding their applicability to high-dimensional over-parameterized models and the nonconvergence condition. In this paper, we propose an efficient Hessian-free unlearning approach. The key idea is to maintain a statistical vector for each training data, computed through affine stochastic recursion of the difference between the retrained and learned models. We prove that our proposed method outperforms the state-of-the-art methods in terms of the unlearning and generalization guarantees, the deletion capacity, and the time/storage complexity, under the same regularity conditions. Through the strategy of recollecting statistics for removing data, we develop an online unlearning algorithm that achieves near-instantaneous data removal, as it requires only vector addition. Experiments demonstrate that our proposed scheme surpasses existing results by orders of magnitude in terms of time/storage costs with millisecond-level unlearning execution, while also enhancing test accuracy.
[ "machine unlearning; certified data removal; privacy" ]
Accept (Poster)
https://openreview.net/pdf?id=C3TrHWanh5
https://openreview.net/forum?id=C3TrHWanh5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x7QlGELNMR", "sYmM78CHxa", "remFUYTDOs", "hgLB9durX1", "gaUkxPFIug", "eMjk4ya5rK", "co3UA1mLaU", "baXXDHEzzB", "b81a3lzKtu", "ZyvXeVRfg4", "Vrspt8LQ5a", "VMKltueiIX", "In4NQjEzKl", "EFaVAeJtiv", "DnAd62p2L8", "DkXcwODrdn", "CUPFQqz3Ui", "8BybjqzwEU", "19UrQtygiY", "178ZzC9wci", "0AK5c6Yg2g" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732281378685, 1730875008103, 1732167127879, 1732433005914, 1732166911805, 1732702035489, 1737523417477, 1732433135297, 1731089647287, 1730782401847, 1732561140181, 1732519991581, 1732307969990, 1732545179505, 1732167063123, 1732168342881, 1730378324407, 1732168259451, 1732166983192, 1732537633932, 1734614407039 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission830/Reviewer_t2wR" ], [ "ICLR.cc/2025/Conference/Submission830/Reviewer_CNQV" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Submission830/Reviewer_kjAJ" ], [ "ICLR.cc/2025/Conference/Submission830/Reviewer_ia1D" ], [ "ICLR.cc/2025/Conference/Submission830/Reviewer_ia1D" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Submission830/Reviewer_kjAJ" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission830/Reviewer_t2wR" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Submission830/Authors" ], [ "ICLR.cc/2025/Conference/Submission830/Reviewer_CNQV" ], [ "ICLR.cc/2025/Conference/Submission830/Area_Chair_p2KV" ] ], "structured_content_str": [ "{\"title\": \"Response to Rebuttal\", \"comment\": \"I appreciate the authors' detailed response. I appreciate the explanation regarding why HVP cannot be utilized by other unlearning methods like NS and IJ. I also appreciate the explanation of the tradeoff between approximation error and model training. The experiments on the model's performance under different step sizes are very thoughtful.\"}", "{\"summary\": \"The paper proposes a new unlearning algorithm which extracts second-order information in a Hessian-free manner without the need to assume strong convexity. The key idea is to track and remove the impact of a specific sample in the entire update trajectory of the model, which is called affine stochastic recursion between the retrained and learned models in this work. It provides theoretical guarantees on generalization, deletion capacity, and space/time complexities. Experiments are conducted to demonstrate the superiority of the proposed algorithm.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Strengths include:\\n1) The paper proposes affine stochastic recursion to pinpoint the overall impact of a specific sample on the learned model.\\n2) It makes the unlearning process efficient via the Hessian-free computation.\\n3) It provides theoretical guarantees that cover generalization, deletion capacity, and space/time complexities. 
\\n4) Experiments are provided to demonstrate the advantages of the proposed unlearning algorithm.\", \"weaknesses\": \"Weaknesses include:\\n1) The recollection matrix M looks incorrect, based on the derivation given in Appendix C.1\\n2) Limitations of the algorithms are not discussed. For example, the proposed algorithm may not perform well on large-scale datasets given the quadratic time complexity in data size.\", \"questions\": \"1) Could you explain why NS and IJ can't utilize HVP? Although they involve Hessian inverse, it could be approximated by using like least-square which essentially does HVP as well.\\n2) Regarding the correlation metric on loss change, could you tell us what stopping rule you use while calculating those loss changes across different algorithms? I feel this is important for gauging performance.\\n3) It is unclear whether fine-tuning was used for each of the algorithms in experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors (2)\", \"comment\": \"**W4:** *Presentation should be improved.*\\n\\nThanks for your careful checks. We tried our best to improve the manuscript and made some changes to the manuscript. And here we did not list the changes but marked in blue in the revised paper. Below are our detailed responses.\\n\\n>**W4.1:** *Considering population risk (1) and empirical risk (2); Author could remove (2) for simplicity.*\\n>\\n>>We respectfully disagree with the suggestion to remove the definition of empirical risk in equation (2). Providing rigorous definitions of both population risk (1) and empirical risk (2) is a common practice in many works, including the unlearning literature **R1, R2, R3**. 
These definitions also facilitate the analysis of generalization guarantees and the empirical risk minimizer in the subsequent sections.\\n>\\n>**W4.2:** *The (3) should be rewritten since linear scaling rule is introduced.*\\n>\\n>>We did not violate the linear scaling rule in **R9**. We wrote $w_{e,b+1} = w_{e,b} + \\\\frac{\\\\eta_{e,b}}{|\\\\mathcal{B} _ {e,b}|} \\\\sum \\\\nabla \\\\ell(w_{e,b})$ because the definition of batch size $|\\\\mathcal{B} _ {e,b}|$ is needed in later analysis.\\n>\\n>**W4.3:** *If intent is for the product to go in reverse order in line 10 of algo. 1, the notation should ideally be clarified in the text.*\\n>\\n>> We greatly appreciate the reviewer\\u2019s suggestion. To avoid any misunderstanding, we have revised the manuscript and added the necessary explanation in Appendix C.1.\\n>\\n>**W4.4:** *In your Lemma 2, I don't think there exists a valid $G$.*\\n>\\n>> Thanks for your comment. We would like to explain that we can obtain a valid $G$ typically by directly computing the norm of the gradient. If we need gradient clipping after computing the norm to prevent divergence, then $G$ can be the clipping threshold.\\n>\\n>**Q4.4:** *When removing the $u _ j$, why the normalization constant remain unchanged?*\\n>\\n>> The constant remains unchanged to keep the ratio between the step size and the batch size constant. The specific reason is as follows:\\n>\\n>> - From the perspective of the linear scaling rule, the same ratio of $\\\\eta$ to $|\\\\mathcal{B} _ {e,b}|$. ensures there is no loss of accuracy.\\n>> - From the perspective of reweighting in Appendix C.1, this ensures that the weighting of the remaining samples during each model update remains consistent.\\n>\\n>**Q4.5:** *How can you ask a learning algorithm within the (solution) parameter space?*\\n>\\n>> Thank you for your suggestions. 
We have fixed the typo in the definition and changed **L178** to $\\\\Omega: \\\\mathcal{Z}^n \\\\rightarrow \\\\mathcal{W}$.\\n>\\n>**W4.6:** *Typo in (1) **L156** for total epochs, batches (2) **L289** for $B=\\\\left\\\\lceil\\\\frac{n}{|\\\\mathcal{B}|}\\\\right\\\\rceil$ (3) **L1562** for Figure 4 caption.*\\n>\\n>>Thanks for your correction. We have \\n>\\n>>- updated **L156** and Algorithm, changing $E$ and $B$ to $E+1$ and $B+1$.\\n>>- added ceiling symbol to $B$.\\n>>- removed redundant \\\"comparison\\\" in Appendix Fig. 4 caption.\"}", "{\"title\": \"Reply to Reviewer kjAJ\", \"comment\": \"We appreciate the reviewer\\u2019s feedback, especially the suggestions regarding the description of prior works. We agree with the perspective that a convex loss function can be reduced to a problem with a strongly convex loss. To avoid similar confusion as raised in **Q1**, we have revised the manuscript to modify the assumption of strong convexity in prior work to convexity in the abstract (**L17**), introduction (**L73**) as suggested by the reviewer, as well as in the related works section (**L130**). Additionally, we have followed the reviewer's suggestion to add \\\"in $\\\\mathbf{w}$\\\" to Assumption 1 (**L299**) for the statement to be precise.\\n\\nWe also appreciate the reviewer\\u2019s suggestion to highlight the contributions of our Hessian-Free approach, which was omitted and missed in our initial manuscript. In the revised version, we have explicitly emphasized the advantages of our Hessian-Free approach and its comparison to Hessian-based methods in handling multiple deletion requests. This revision better highlights the contribution of our work, as suggested by the reviewer. 
The specific revisions are as follows:\\n\\n> - We added more descriptions of previous works in the Introduction **(L68, L90)**.\\n> - We included the advantage of Hessian-free methods in handling multiple deletion requests online, which previous Hessian-based works cannot achieve, in Theorem 1 (Additivity) **(L252-254)**.\\n> - we explained why previous Hessian-based work fails to use HVP in Section 4.4 **(L370-L375)**.\\n\\nIf the reviewer has any further **Q**uestions or **W**eakness that need to be addressed, we would be glad to provide any clarification.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We sincerely appreciate your support and constructive suggestions. We have made every effort to revise the manuscript based on your valuable and constructive suggestions.\\n\\nWe provide our responses to the comments, denoted by **[W]** for weaknesses, and **[Q]** for questions. The references, denoted by **[R]**, are provided in the General Response. And we use **[L]** to represent lines in our manuscript.\\n\\n**W1:** *Qualifying the generality of complexity and highlighting this as an assumption similar to Line 299 would be beneficial and avoid misleading readers.*\\n\\n>Thank you for your insights. Based on the reviewer's suggestions, we have tried our best to revise the manuscript for descriptions and assumptions similar to **L299**, and provide additional references, such as **R10**, to demonstrate that the ratio of $E$ to $B$ is generally small. Additionally, the feedback has inspired us to provide a more precise description of the impact of $E$ and $|B|$ on complexity in our revised version. In fact, the ratio of $E$ to $|B|$ is much smaller than $d$. Therefore, our method works for most scenarios, except in the cases where SGD is over-iterating the epoch to levels in the dimension $d$.\\n\\n**W2:** (1) *Shalev-Shwartz et al. (2009) assumes i.i.d. of samples, which should translate to i.i.d. 
of the set.* **AND** (2) *The term C4 in Equation 74 is small enough which conceptually (roughly) translates to whether $w _ {E,B}$ is a good empirical minimizer or not. It seems problematic to assert that the number of epochs required for convergence to empirical risk minimizer is the same order of magnitude as batch size.*\\n\\n>- For (1): We greatly appreciate the constructive suggestions from the reviewer, and we have completed the missing details in Lemma 9 in the revised version.\\n>\\n>- For (2): We do not require $w_{E,B}$ to be an empirical risk minimizer in Theorem 6 of Section 4.2. We appreciate the reviewers for spending time on the proofs in our paper and for outlining our proof sketch in the review. \\n>\\n> We would like to further clarify that (a) represents the excess risk between the retrained empirical risk minimizer $\\hat{w}^{-U}$ and ${w}^*$, while (c) denotes the optimization error between the retrained model ${w}^{-U} _ {E,B}$ and $\\hat{w}^{-U}$. Using (a) and (c), we can bound the excess risk of the retrained model ${w}^{-U} _ {E,B}$ relative to ${w}^*$. Note that at this point, we no longer require convergence to the empirical risk minimizer. Furthermore, given (b) and (d), which represent the approximation error and noise of the unlearning model $\\tilde{w}^{-U} _ {E,B}$, we can bound the excess risk of the unlearned model $\\tilde{w}^{-U} _ {E,B}$ relative to ${w}^*$ in Theorem 6. Therefore, in Theorem 6, we do not need the term C4 to be sufficiently small as it also satisfies $\\mathcal{O}(\\rho^n)$, and we do not require $w _ {E,B}$ to be an empirical risk minimizer. This is a key distinction that sets our Theorem 6 apart from previous works, as the generalization guarantees in earlier works are derived through an implicit assumption about the empirical risk minimizer. 
\\n\\n**Q1:** *Convexity suffices in **R1** because unlearning for a convex loss can be reduced to a problem with strong convex loss.* **AND** *Could the authors clarify what assumptions are needed in previous work?*\\n\\n>- First of all, previous algorithms requires strong convexity. The reason that previous work requires strong convexity is to guarantee that the Hessian is positive definite, thus ensuring its invertibility. Simply claiming to require convexity is insufficient because a semi-positive definite Hessian still cannot guarantee invertibility, although **R1** can use regularization techniques (such as L2 regularization) to make a logistic regression loss function strongly convex.\\n>- We thank the reviewer's comments and would like to clarify that the assumptions in Assumption 1 is consistent with previous works **R1, R2**. In addition to this, previous works require the implicit assumption of a unique empirical risk minimizer.\\n\\n**Q2:** *Is the loss assumed to be jointly convex in both $z$ and $w$?*\\n\\n>All assumptions in **L299** are made solely with respect to $w$. We appreciate the reviewer\\u2019s valuable question and we provide a more detailed explanation of the definition of Assumption 1 in the appendix (**L1389-L1403**) in revised manuscript.\"}", "{\"title\": \"Acknowledgment of the Reviewer\\u2019s Constructive Comments\", \"comment\": \"We appreciate the reviewer\\u2019s understanding of the contribution our unlearning method makes in bridging the gap in implementing non-convex and non-convergence conditions. We also sincerely thank the reviewer for improving the rating. Below is our response to address your remaining concerns.\\n\\n> - We acknowledge that, while we propose a new method that does not rely on convexity, our analysis builds upon the theoretical framework of previous works for comparison purposes. 
These prior certified unlearning works, both in terms of methodology and theoretical analysis, were limited to convex settings, which is why much of our theoretical results rely on convexity\\u2014except for our unlearning-related conclusions. We briefly discuss the limitation in the revised manuscript **(L308, L934)**. In the future, we plan to extend the existing theoretical framework, though this may require revisiting existing methods and frameworks, or introducing new assumptions to address more complex scenarios. \\n>\\n> - In response to the reviewer\\u2019s concern on **W4.4**, we have, following the reviewer\\u2019s suggestion, stated Assumption 1 before Lemma 2 in order to ensure that the upper bound holds.\\n\\n If the reviewer has any further **Q**uestions or **W**eakness that need to be addressed, we would be glad to provide any clarification.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reply to Reviewer t2wR\", \"comment\": \"We appreciate the meaningful discussion with the reviewers, which has helped us further clarify and enhance the contributions of our work that were previously overlooked. Since Reviewer CNQV also raised questions regarding the HVP and Reviewer kjAJ suggested this as a contribution worth highlighting, we added new descriptions to the revised manuscript to further emphasize our contributions. 
The specific changes are as follows:\\n\\n> - We added more descriptions of previous works in the Introduction **(L68, L90)**.\\n> - We included the advantage of Hessian-free methods in handling multiple deletion requests online, which previous Hessian-based works cannot achieve, in Theorem 1 (Additivity) **(L252-254)**.\\n> - We explained why previous Hessian-based works fail to use HVP in Section 4.4 **(L370-L375)**.\\n\\nIf the reviewer has any further **Q**uestions or **W**eakness that need to be addressed, we would be glad to provide any clarification.\"}", "{\"summary\": \"This paper addresses the challenge of certified unlearning, where models are required to forget information at the request of data providers. The authors introduce a novel approach leveraging details tracked during model training to approximate how the training process would have proceeded without the data marked for deletion. Notably, the proposed method circumvents the need for full Hessian computations or inversion by using Hessian-vector products for second-order information. Additionally, it does not assume that the original model is an empirical risk minimizer. The authors' theoretical analysis argues that their method offers enhanced unlearning guarantees, efficient storage and precomputation, faster data deletion, and improved generalization bound, particularly for overparameterized models. Empirical results support the approach's claim of rapid unlearning execution.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The idea of tracking algorithmic updates during training to facilitate unlearning is straightforward yet impactful, with sound theoretical analysis for unlearning privacy guarantee and decent empirical evaluation.\\n2. Removing the assumption that the initial learned model must be an empirical risk minimizer is significant for the practical applicability of the method.\\n3. 
The claimed efficiency improvements are particularly relevant for overparameterized deep models where the batch size is comparable to the number of training epochs, probably covering a substantial portion of common machine learning models.\", \"weaknesses\": \"1) The claim regarding the efficiency of precomputation and storage (lines 378\\u2013387) hinges on two critical assumptions: (i) the model's parameter size $d$ is significantly greater than the training data size $n$, and (ii) the number of epochs $E$ is of the same order as the batch size $|B|$. While the authors state (ii) as typical (\\\"Typically, $E$ and $|B|$ are of the same order\\\"), a reference would strengthen this claim. Moreover, the assumption may not hold in scenarios such as online learning or streaming applications. Qualifying its generality and highlighting this as an assumption similar to Line 299 would be beneficial and avoid misleading readers.\\n\\n2) The generalization analysis in Section 4.2 considers strongly convex loss functions and focuses on excess risk bound. The excess risk bound consists of two terms: the first term comes from (a) the excess risk of the empirical risk minimizer, and the second term comes from (b) unlearning error, (c) optimization error, and (d) the noise for obfuscation (Line 14 of Algorithm 1). There are at least two problems with this analysis and theorem statement.\\n\\n2.1 In strongly convex settings, the assumption that $E$ and $|B|$ are comparable can seem more questionable, especially as the excess risk bound expressed in big-O notations (Line 319) uses this assumption. It might be okay to make this assumption for controlling (b), but $E$ and $B$ also affect (c), i.e. whether the term $C_4$ in Equation 74 is small enough, which conceptually (roughly) translates to whether $w_{E, B}$ is a good empirical minimizer or not. 
It seems problematic to assert that the number of epochs required for convergence to empirical risk minimizer is the same order of magnitude as batch size. \\n\\n2.2 The first term comes from Lemma 9, which cites Shalev-Shwartz et al. (2009). The latter assumes i.i.d. of samples, which should translate to i.i.d. of the set $U$.\", \"questions\": \"1. In the abstract (Line 017) and introduction (Line 073), the authors claim that previous work requires strong convexity, while in Line 66 the authors said previous work requires convexity. It seems to me that convexity suffices in Sekhari et al. (2021) because unlearning for a convex loss function can be reduced to a problem with strongly convex loss. Could the authors clarify what assumptions are needed in previous work?\\n2. In Assumption 1 (Line 299), is the loss assumed to be jointly convex in both $z$ and $w$, only $w$, or only $z$? Similar questions for Lipschitzness and smoothness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a Hessian-free approach to certified machine unlearning that aims to improve computational efficiency and scalability in removing specific data points from a model without full retraining. Instead of relying on direct Hessian computations, which are computationally prohibitive in high-dimensional and non-convex settings, the method approximates the impact of data removal through affine stochastic recursions that analyze model update discrepancies. 
The method achieves computational gains, reducing unlearning time to $\\\\mathcal{O}(md)$ and storage to $\\\\mathcal{O}(nd)$, outperforming existing second-order methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Both online learning and certified unlearning are highly significant research areas in machine learning.\", \"The paper is the first to introduce a Hessian-free approach to certified unlearning, which is a notable change from the dominant reliance on Hessian-based methods in second-order unlearning.\", \"Experimental validation shows unlearning runtime in milliseconds, robust generalization guarantees, and privacy improvements against membership inference attacks with an added noise mechanism.\", \"The authors also include relevant code and pseudo-code in the appendix, which is helpful for reproducibility.\"], \"weaknesses\": [\"1. The paper\\u2019s theoretical guarantees hinge on assumptions of convexity and smoothness (Assumption 1), which restricts the scope of the analysis to settings that are arguably idealized for real-world applications involving non-convex (e.g. deep learning) models. Thus, the authors likely overstate their contributions.\", \"2. I would say, Hessian-free optimization for second-order (and even higher-order) algorithms is an active research area. Beyond the Machine Unlearning domain discussed in detail on page 3 and in Appendix A, the authors should connect the ideas presented here with a broader body of Hessian-free optimization work in classical parametric optimization. A numerical comparison, if feasible, is also encouraged, as this may better highlight the novelty of this paper\\u2019s contributions beyond merely applying existing Hessian-free methods to the Machine Unlearning field.\", \"3. While the authors present some experimental results, they lack diversity in dataset selection and only test the approach on ~5 datasets. 
The efficacy of this unlearning mechanism remains unclear in large-scale or high-dimensional applications where computational efficiency is critical.\", \"4. The writing quality of this work is limited and the presentation should be improved, for instance\", \"the (2) is simply one case of (1) by replacing the $\\\\mathcal{D}$ with empirical distribution. i.e., the authors could remove (2) for simplicity, or just start from the (1)\", \"I think the (3) should be written as $\\\\mathbf{w}\\\\_{e, b+1} \\\\leftarrow \\\\mathbf{w}\\\\_{e, b}-\\\\eta\\\\_{e, b} \\\\sum\\\\_{i \\\\in \\\\mathcal{B}_{e, b}} \\\\nabla \\\\ell\\\\left(\\\\mathbf{w}\\\\_{e, b} ; z\\\\_i\\\\right),$ since the linear scaling rule (Goyal et al. 2017) is introduced later in line 163.\", \"line 156 and algo. 1: as your notation, the total epochs and batches would be $E+1$ and $B+1$. So as your complexity in section 4.4\", \"when removing the $u\\\\_j$, why the normalization constant of $\\\\eta$ in your (5) is $ \\\\mathcal{B}\\\\_{e, b(u\\\\_j)}$ instead of $ \\\\mathcal{B}\\\\_{e, b(u\\\\_j)}-1$?\", \"definition 1: how can you ask a learning algorithm within the (solution) parameter space $\\\\mathcal{W}$? Please revise the definition or rephrase your wording\", \"in your lemma 2, I don't think there exists a valid $G$ such that $G=\\\\max \\\\left\\\\\\\\|\\\\nabla \\\\ell\\\\left(\\\\mathbf{w}_{e, b} ; z\\\\right)\\\\right\\\\\\\\|<\\\\infty$. This should be a consequence of the assumption that the grad of $l$ is uniformly upper bounded. Otherwise, this could be derived by your assumption 1, which is imposed later\", \"Theorem 4 should be $B=\\\\left\\\\lceil \\\\frac{n}{|\\\\mathcal{B}|} \\\\right\\\\rceil$\", \"if the intent is for the product to go in reverse order in line 10 of algo. 1, i.e. 
$\\\\prod_{k=E}^e \\\\prod_{b=B-1}^{b(u)+1}$, the notation should ideally be clarified in the text to avoid misunderstandings\", \"font size in figures e.g., 1, 2, 7, is too small and needs to be enlarged for clarity\", \"figure 4 caption: \\\"a comparison comparison between\\\"\"], \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your rebuttal\", \"comment\": \"I understand that your new algorithm can be applied to *both* convex and non-convex settings, but much of the analysis is built on convexity, which does present a limitation. It would be nice if you could further discuss that in your work.\\n\\nRegarding **W4.4**, my main point is that Assumption 1 should be stated before Lemma 2 rather than in its current position. Otherwise, the upper bound may not hold (unless you explicitly assume it, but the lemma appears to be presented as a result rather than a condition).\\n\\nThat said, given the overall responses, I have revised my stance.\"}", "{\"title\": \"Follow-up on rebuttal and a kind reminder\", \"comment\": \"Dear Reviewer ia1D,\\n\\nWe would like to express our gratitude for your constructive suggestions and thoughtful reviews, which have proven invaluable in enhancing the quality of our paper. As a follow-up to our rebuttal, we would like to kindly remind you that the deadline for discussion closure is rapidly approaching. \\n\\nDuring this open response period, we aim to engage in discussions, address any further inquiries, and further enhance the overall quality of our paper. We would appreciate it if you could confirm whether you have had the opportunity to review our rebuttal, in which we made concerted efforts to address all of your concerns. It is of utmost importance to us that our responses have been thorough and persuasive. 
We have also attached tables summarizing the experimental results for diverse dataset selection for your convenience.\\n\\nShould you require any additional information or clarification, please do not hesitate to let us know. Thank you once again for your time and valuable consideration.\\n\\nBest regards,\\n\\nAuthors\\n\\n(Submission Number: 830)\\n\\n\\n| Remove 5% Wine Dataset | Distance (\\u00b1std) | Unlearning time (Sec) | Storage (MB) | Precomputing time (Sec) |\\n| ---------------------- | ---------------- | ---------------------- | ------------- | ----------------------- |\\n| **IJ** | 1.33 (\\u00b12.68) | 0.31 | 9.12 | 155.01 |\\n| **NU** | 0.71 (\\u00b10.66) | 7.55 | 9.12 | 153.16 |\\n| **Proposed** | **0.04 (\\u00b10.01)** | **0.0004** | **1.06** | **11.56** |\\n\\n| Remove 10% Wine Dataset | Distance (\\u00b1std) | Unlearning time (Sec) | Storage (MB) | Precomputing time (Sec) |\\n| ----------------------- | ---------------- | --------------------- | ------------ | ----------------------- |\\n| **IJ** | 0.65 (\\u00b10.48) | 0.31 | 9.12 | 155.01 |\\n| **NU** | 0.88 (\\u00b11.61) | 14.34 | 9.12 | 153.16 |\\n| **Proposed** | **0.09 (\\u00b10.02)** | **0.0004** | **1.06** | **11.56** |\\n\\n\\n\\n| Remove 5% Obesity Dataset | Distance (\\u00b1std) | Unlearning time (Sec) | Storage (MB) | Precomputing time (Sec) |\\n| ------------------------- | ---------------- | ---------------------- | ------------- | ----------------------- |\\n| **IJ** | 0.93 (\\u00b10.24) | 1.98 | 14.21 | 2,240.16 |\\n| **NU** | 0.91 (\\u00b10.21) | 106.50 | 14.21 | 2,228.83 |\\n| **Proposed** | **0.58 (\\u00b10.15)** | **0.0004** | **13.23** | **102.06** |\\n\\n| Remove 10% Obesity Dataset | Distance (\\u00b1std) | Unlearning time (Sec) | Storage (MB) | Precomputing time (Sec) |\\n| -------------------------- | ---------------- | ---------------------- | ------------- | ----------------------- |\\n| **IJ** | 1.18 (\\u00b10.19) | 1.98 | 14.21 | 2,240.16 |\\n| **NU** | 1.35 
(\\u00b10.49) | 212.66 | 14.21 | 2,228.83 |\\n| **Proposed** | **0.83 (\\u00b10.11)** | **0.0004** | **13.23** | **102.06** |\"}", "{\"comment\": \"Thanks for the authors' detailed response. I am convinced by the response to W2 and I appreciate that the authors incorporate feedback from W1 into the paper.\\n\\nFor Q1, I would still argue that for convex functions, the fact that [R1] uses regularization as a technique shouldn't be interpreted as their work requires the strong convexity assumption; after all, their unlearning and generalization guarantees are stated for the original convex function before adding regularization. For Q2, I would still suggest that the authors use \\\"convex in w\\\" in the main body for the statement to be correct/ precise. \\n\\nSimilar to Reviewers t2wR and CNQV, I also miss the reasons why neither [R1] nor [R2] can be implemented using only Hession-vector products by using, for example, the conjugate gradient method. After learning how prior works _must_ be Hessian-based, I am now able to see the contributions underlying the Hessian-free property more clearly. I suggest the authors also highlight it in the paper, as this point is repeatedly missed.\"}", "{\"title\": \"Sincere Gratitude for the Correction from Reviewer CNQV\", \"comment\": \"Thanks a lot for Reviewer CNQV careful proofreading and for the time you have dedicated to our work. After considering your feedback, we realized that there was indeed an unintended error in the notation of the $\\\\mathbf{M} _ {e, b(u)}$ in Eq. 10. Specifically, we mistakenly formulated the recursive product $\\\\mathbf{M} _ {e, b(u)}$. We have made the necessary corrections to the related content (**L221**, and **L1075**, **L1180** in the appendix) marked in blue in the revised paper. This correction does not affect the other conclusions of our work. Below are the details of our revisions:\\n\\n> We removed the unnecessary and cumbersome product symbols. 
In the latest manuscript, we define the recollection matrix as $\\\\mathbf{M} _ {e,b(u)} := \\\\frac{\\\\eta _ {e,b(u)}}{|\\\\mathcal{B} _ {e,b(u)}|} \\\\hat{\\\\mathbf{H}} _ {E,B-1 \\\\rightarrow e,b(u)+1 }$, where $\\\\hat{\\\\mathbf{H}} _ {E,B-1 \\\\rightarrow e,b(u)+1 }= (\\\\mathbf{I}- {\\\\eta _ {E,B-1}}\\\\mathbf{H} _ {E,B-1})\\\\cdot(\\\\mathbf{I}- {\\\\eta _ {E,B-2}}\\\\mathbf{H} _ {E,B-2})...(\\\\mathbf{I}- {\\\\eta _ {e,b(u)+1}}\\\\mathbf{H} _ {e,b(u)+1})$ which represents the product of $\\\\mathbf{I}-\\\\eta \\\\mathbf{H}$ from $E$-th epoch's $B-1$-th update to $e$-th epoch's $b(u)+1$-th update. The improved notation of $\\\\mathbf{M} _ {e,b(u)}$ in the revised version now effectively and clearly conveys its meaning. \\n\\nWe sincerely appreciate your thorough proofreading once again, as well as the meaningful suggestions and questions you raised earlier. If there are any other questions, we would be more than glad to address any further requests from the reviewer.\"}", "{\"title\": \"Official Comment by Authors (1)\", \"comment\": \"We thank the reviewer for all of your valuable comments. We sincerely hope that this revised manuscript has addressed all your comments and suggestions. We are more than willing to provide further clarification or address any additional concerns.\\n\\nWe provide our responses to the comments, denoted by **[W]** for weaknesses, **[Q]** for questions, **[L]** for lines in our manuscript, and **[R]** for references. \\n\\n**W1:** *Theoretical guarantees hinge on assumptions of convexity.*\\n\\n>- Unlike previous works, the key distinction of our method lies in its ability to handle non-convex scenarios. This is achieved by avoiding the need for Hessian inversion, meaning that strong convexity is not required to ensure the positive definiteness of the Hessian. 
Therefore, our unlearning method and its theoretical guarantees do not rely on the assumption of convexity.\\n>- In deriving other performance analyses involving learning process conclusions, such as generalization guarantee, we inherit the convexity assumption from prior unlearning works. As noted in **Line 97** and **L308-L311**, we clarified in the initial manuscript that our proposed method and theoretical unlearning guarantees apply to both convex and non-convex settings, but do not extend to the theoretical analysis of the learning process.\\n>\\n>In summary, our contributions for non-convex assumption primarily focus on the unlearning domain. For the theoretical analysis involving the learning process conclusions, we have built upon previous unlearning work, adopting simple and rigorously assumed conclusions such as Lemma 9. As a future extension, we plan to provide non-convex theoretical conclusions for generalization guarantee in later versions or follow-up works.\\n\\n**W2:** *Authors should connect the ideas presented with a broader body of Hessian-free optimization work in classical parametric optimization.*\\n\\n>We appreciate the reviewer\\u2019s suggestion. Here, we briefly clarify the connection between our ideas and Hessian-free optimization work. Our unlearning method and previous Hessian-free optimization approaches are parallel lines of research, such as using conjugate gradient with HVP to perform sub-optimization. Different algorithms that use many of the same key principles have appeared in the literature of various communities under different names such as Newton-CG, CG-Steihaug, Newton-Lanczos, and Truncated Newton, as stated in **R11**. But in any case, the learning and unlearning processes are distinct from each other. 
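To make the shared mechanism concrete, here is a minimal sketch of a Hessian-vector product that never materializes the Hessian. It is illustrative only: it uses a central finite difference of the gradient in place of the autodiff double-backprop that frameworks actually use, and the quadratic toy loss is an assumption chosen so the result is easy to check.

```python
import numpy as np

def hvp(grad_fn, w, v, eps=1e-5):
    """Hessian-vector product H(w) @ v from two gradient evaluations,
    via a central difference -- the d x d Hessian is never materialized.
    (Autodiff double-backprop plays this role in practice; the finite
    difference here only keeps the sketch dependency-free.)"""
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)

# Toy quadratic loss(w) = 0.5 * w^T A w, so grad(w) = A w and Hessian = A.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = A @ A.T                       # symmetrize: a valid Hessian
grad_fn = lambda w: A @ w
w0, v = rng.normal(size=5), rng.normal(size=5)
print(np.allclose(hvp(grad_fn, w0, v), A @ v, atol=1e-6))  # True
```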
The commonality lies in leveraging automatic differentiation tools to compute HVP, avoiding explicit computation of the Hessian, as this provides a numerically stable and precise way to compute the desired directional derivative.\\n\\n**W3:** *Lack diversity in dataset selection and only test the approach on ~5 datasets.*\\n\\n>- We greatly appreciate the valuable suggestion. In addition to MNIST, FMNIST, CIFAR10, CelebA, and LFW, we have conducted experiments on additional datasets for sufficient diversity in dataset selection, including HAPT, Adult, Wine, and Obesity. Our initial manuscript included digit, clothing, object, gender, and human face classification, and we further provided experiments on human activity recognition, income, wine quality, and estimation of obesity levels. These will demonstrate sufficient diversity in our dataset selection. The results show that we achieve good performance across different datasets, demonstrating the advantages of our algorithm. Please refer to the anonymous [repository link](https://github.com/Anonymous202401/If-Recollecting-were-Forgetting) provided in **L107** for detailed code and figures. \\n>- Besides, we would like to clarify our experimental contributions. Previous theoretical works **R1, R3** did not include experiments, and **R2** only conducted experiments with binary LR. Even the recent certified unlearning works **R4** and **R5** have experimented with convex models. The datasets used in these works were limited to binary MNIST and CIFAR10. While providing different theoretical insights, the extensive experiments led to space limitations in our writing. This is also why the reviewer raised the concern about the font size in the figure 1 being too small. We have followed the suggestion and increased the font size. 
However, for the additional experiments suggested by reviewer on the more datasets, we have to place these experiments in the appendix.\"}", "{\"title\": \"General Response\", \"comment\": \"Dear Senior Area Chairs, Area Chairs, and Reviewers,\\n\\nWe thank all reviewers for their constructive and valuable comments and appreciate the time spent on our manuscript. We have provided responses to the comments to fully address all reviewer concerns, denoted by **[W]** for weaknesses, **[Q]** for questions, **[L]** for lines in our manuscript, and **[R]** for references. We made minor adjustments to the manuscript and use **[C in L]** to denote the changes in lines in our revised manuscript. We have provided our additional experimental results in an anonymous GitHub repository, and the link is provided in **L107**. \\n\\nHere's a summary of our responses:\\n\\n**Reviewer kjAJ:**\\n\\n>- We adopt the reviewer\\u2019s suggestion to provide descriptions and references regarding complexity to avoid any potential misunderstandings in the revised version. **(C in L266, L380)**\\n>- We explain the analysis of generalization performance and the statement of the theorem in Section 4.2.\\n>- We clarify the misunderstanding regarding the analysis of optimization error in the performance analysis.\\n>- We adopt the reviewer\\u2019s suggestion and added the missing description in Lemma 9. **(C in L1407 of Appendix)**\\n>- We clarify the assumptions between previous works and our work, and provide a more detailed definitions of assumptions to avoid confusion. \\n\\n**Reviewer CNQV:**\\n\\n>- We correct the errors of notation in Matrix $\\\\mathbf{M} $ based on the reviewer\\u2019s comments. 
**(C in L221)**\\n>- We clarify that our work have discussed a limitations analysis and propose corresponding improvement methods.\\n>- We explain the reasons why HVP with some approximation techniques are not applicable to previous studies .\\n>- We explain the setup of the experiments and provide additional explanations in the revised version. **(C in L1031 of Appendix)**\\n\\n**Reviewer ia1D:**\\n\\n>- We strive to clarify that our non-convexity claims are only applicable to unlearning methods and their associated theoretical analysis, and we explain why one of our contributions is bridging the gap in non-convex scenarios for certified unlearning methods.\\n>- We clarify that the optimization methods used in the learning process are unrelated to our approach and explain the connection between existing Hessian-free optimization methods and our unlearning work.\\n>- We incorporate the reviewer\\u2019s suggestion for experiments on more datasets and provide additional clarifications regarding our experimental contributions. **(Additional experiments provided)**\\n>- We clarify some suggestions from the reviewer regarding definitions and explain why we do not modify these definitions.\\n>- Based on the reviewer\\u2019s suggestions, we modify the typos and inaccurate statements in the paper. **(C in L156, L289, L178, and L1562 of Appendix)**\\n\\n**Reviewer t2wR:**\\n\\n>- We explain the assumptions of our algorithm regarding generalization error and listed it as an direction for future work.\\n>- We explain the reasons why HVP with some approximation techniques for previous unlearning work are not applicable.\\n>- We clarify the trade-off between learning rate and both learning and unlearning, incorporating the reviewer\\u2019s suggestion for trade-off experiments. **(Additional experiments provided)**\\n\\n**References:**\\n\\n> [R1] Remember what you want to forget: Algorithms for machine unlearning. 
Sekhari et al., NeurIPS 2021.\\n>\\n> [R2] Algorithms that approximate data removal: New results and limitations. Suriyakumar et al., NeurIPS 2022.\\n>\\n> [R3] Certified minimax unlearning with generalization rates and deletion capacity. Liu et al., NeurIPS 2023.\\n>\\n> [R4] Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning. Chien et al., NeurIPS 2024.\\n>\\n> [R5] Certified Machine Unlearning via Noisy Stochastic Gradient Descent. Chien et al., NeurIPS 2024.\\n>\\n> [R6] How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective. Wu et al., NeurIPS 2018.\\n>\\n> [R7] A loss curvature perspective on training instabilities of deep learning models. Gilmer et al., NeurIPS 2022.\\n>\\n> [R8] Fast Model Debias with Machine Unlearning. Chen et al., NeurIPS 2023.\\n>\\n> [R9] An Empirical Model of Large-Batch Training. McCandlish et al., NeurIPS 2023.\\n>\\n> [R10] A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay. Leslie N. Smith, ArXiv 2018.\\n>\\n> [R11] Training Deep and Recurrent Networks with Hessian-Free Optimization. James Martens and Ilya Sutskever. 2012.\\n\\n**Additional Revisions:**\\n\\n> 1. We shortened the title to allow for more detailed descriptions in Section 1.\\n>\\n> 2. We revised the imprecise expression in Appendix C.1.\"}", "{\"summary\": \"This paper proposes a Hessian-free machine unlearning algorithm. The authors theoretically analyze the approximation error for both convex and non-convex loss functions and prove the generalization theory for strongly convex loss functions. Extensive experiments demonstrate the efficiency of the proposed algorithm compared to other Hessian-based algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The authors analyze the training trajectory and propose a machine unlearning algorithm, which is practical and innovative.\\n2. Compared to other Hessian-based algorithms, the proposed Hessian-free algorithm is efficient, especially for high-dimensional problems.\\n3. The authors conducted comprehensive experiments to validate the effectiveness of their proposed algorithm, and the experimental results are well presented.\", \"weaknesses\": \"Although the authors provide the approximation error analysis, there is no theoretical guarantee for the generalization performance of the unlearning model in the non-convex case.\", \"questions\": \"1. The authors propose using HVP to avoid directly calculating the Hessian matrix and reduce computational complexity, as discussed in Section 4.4. Could other algorithms discussed in Section 4.4 also benefit from HVP? For example, for IJ, $H^{-1}\\\\nabla \\\\ell $ can be approximately computed using $K$ steps of the conjugate gradient method, where each step HVP can be applied. Could this approach enable IJ to achieve lower complexity and experiment time, considering the entire process of precomputation and unlearning? In this case, how does the proposed algorithm compare to IJ?\\n2. The authors discuss in Appendix E that a small step size leads to a smaller approximation error. However, a small step size may result in insufficient model training. Could the authors further explain the trade-off?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We sincerely appreciate your support and constructive suggestions. We have made every effort to revise the manuscript based on your valuable suggestions.\\n\\nWe provide our responses to comments, denoted by **[W]** for weaknesses, **[Q]** for questions, **[L]** for lines in our manuscript, and **[R]** for references in the General Response. 
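For concreteness regarding Q1 of the review above: solving $H^{-1}g$ with conjugate gradient touches $H$ only through Hessian-vector products. The numpy sketch below is a hypothetical reading of the reviewer's suggestion, not code from the paper, and it presumes the right-hand side $g$ (and hence the forgetting sample) is already known at solve time.

```python
import numpy as np

def cg_solve(hvp, g, iters=50, tol=1e-10):
    """Solve H x = g by conjugate gradient, touching H only through
    Hessian-vector products `hvp(v)` -- no explicit H, no inverse."""
    x = np.zeros_like(g)
    r = g.copy()          # residual g - H x (x = 0 initially)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
A = A @ A.T + 8 * np.eye(8)       # SPD toy Hessian (assumed for CG)
g = rng.normal(size=8)
x = cg_solve(lambda v: A @ v, g)
print(np.allclose(x, np.linalg.solve(A, g), atol=1e-4))  # True
```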
\\n\\n**W1:** *There is no theoretical guarantee for the generalization of unlearning model in the non-convex case.*\\n\\n>Our contributions for non-convex assumption primarily focus on unlearning domain. For theoretical analysis involving learning process conclusions, we inherit convexity assumption from prior unlearning works, adopting simple and rigorously assumed conclusions such as Lemma 9. \\n>\\n>We appreciate reviewer\\u2019s constructive comments. Given that unlearning community are starting to explore generalization in non-convex settings, as a future extension, we plan to provide non-convex theoretical conclusions in later versions or follow-up works.\\n\\n**Q1:** *Could other algorithms discussed in Section 4.4 benefit from HVP?*\\n\\n>We sincerely thank reviewer for their insightful questions and provide following clarifications.\\n>\\n>- The key reason is that, to handle deletion requests from unknown users, both NS and IJ require explicit pre-computing of Hessian and its inverse before an unlearning request arrives, which makes HVP impossible.\\n>\\n> Recall the technique details of NS and IJ.\\n>\\n> For NS, the update step is: $ \\\\frac{1}{n-m} { \\\\Big( \\\\frac{1}{n-m}\\\\sum_{i=1}^n \\\\nabla^2 \\\\ell (\\\\hat{\\\\mathbf{w}}; z_i) -\\\\sum_{j \\\\in U} \\\\nabla^2 \\\\ell (\\\\hat{\\\\mathbf{w}};u_j )\\\\Big) }^{-1}\\\\sum_{j \\\\in U} \\\\nabla \\\\ell (\\\\hat{\\\\mathbf{w}}; u_j).$\\n>\\n> For IJ, the update step is: $ \\\\frac{1}{n} {(\\\\frac{1}{n}\\\\sum_{i=1}^n \\\\nabla^2 \\\\ell (\\\\hat{\\\\mathbf{w}}; z _ i) ) }^{-1} \\\\nabla \\\\ell (\\\\hat{\\\\mathbf{w}}; u _ j).$\\n>\\n> However, for Hessian $\\\\sum_{j \\\\in U} \\\\nabla^2 \\\\ell (\\\\hat{\\\\mathbf{w}}; u _ j)$ in NS and gradient $\\\\nabla \\\\ell (\\\\hat{\\\\mathbf{w}}, u_j)$ in both NS and IJ, **(i)** forgetting sample $u _ j$ is unknown before deletion request arrives, and **(ii)** model $\\\\hat{\\\\mathbf{w}}$ will be deleted after processing a deletion request, 
while the subsequent unlearned model is also unknown since the forgetting sample $u_j$ is unpredictable. Therefore, explicit pre-computing is required for $\\sum_{i=1}^n \\nabla^2 \\ell (\\hat{\\mathbf{w}}; z_i)$ in NS and ${\\left( \\sum_{i=1}^n \\nabla^2 \\ell (\\hat{\\mathbf{w}}; z_i) \\right)}^{-1}$ in IJ.\n>\n> This is also a unique advantage of our method, which possesses additivity in Theorem 1 **(L245)**, enabling it to efficiently handle multiple deletion requests in an online manner. We will make this point clearer in the revised manuscript to better highlight our contributions.\n>\n>- In addition, inaccurate techniques (e.g., as mentioned by the reviewer, it could be approximated by using methods like least squares) would also render the theoretical bounds not as strong as the techniques proposed in NS and IJ. These operations require previous works to re-derive the bound, resulting in the loss of the original $\\mathcal{O}(m^2/n^2)$ approximation error advantage.\n>\n>Given the above reasons, approximate techniques are more suitable for non-privacy scenarios involving heuristic methods that do not require certified theoretical guarantees, such as bias removal based on the influence function in **R8**.\n\n**Q2:** *Could the authors further explain the trade-off between smaller approximation error and insufficient model training caused by the step size?*\n\n>We greatly appreciate the insightful question, which has inspired us with new insights. 
Below, we provide explanations and present new experiments to support it.\n\n>As we demonstrate in Appendix **L1806-L1820**:\n>\n>- Theoretical predictions **R6** and empirical validation **R7** suggest that successful training occurs only when optimization enters a stable region of parameter space, where $\\lambda _ 1 \\eta _ 1 < 2$ (with $\\lambda _ 1$ being the largest eigenvalue of the Hessian).\n>- We also observed that when the step size is below a certain threshold $\\eta _ {\\text{2}}$, increasing the step size does not lead to unacceptable errors; e.g., in Table 7, increasing the step size from 0.01 to 0.1 results in an error increase of only 0.005. However, when the step size exceeds 0.1 and increases to 0.3, the error becomes uncontrollably large, increasing by 5.553.\n>\n>Combining these conclusions and observations, there exists a range of step sizes that ensures successful training during the learning phase while preventing errors from escalating in the unlearning phase. Our further experiments support this: maintaining a threshold that prevents fluctuations in error often allows for sufficient training (successful training as stated in **R6**) at an appropriate step size; since our method is less affected by the step size when it is below the threshold, there is typically no noticeable trade-off. For detailed code and results, please refer to the anonymous [repository link](https://github.com/Anonymous202401/If-Recollecting-were-Forgetting) provided in **L107**.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We sincerely appreciate your support and constructive suggestions. We have made every effort to revise the manuscript based on your valuable and constructive suggestions.\n\nWe provide our responses to the comments, denoted by **[W]** for weaknesses, and **[Q]** for questions. The references, denoted by **[R]**, are provided in the General Response. And we use **[L]** to represent lines in our manuscript. 
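As background for the recollection-matrix exchange below: the product order under discussion, with the most recent update's $(\mathbf{I}-\eta\mathbf{H})$ factor left-most, can be written out and sanity-checked on toy values. This is a hypothetical sketch, not the paper's code; the constant-Hessian check uses the closed form $(\eta/|\mathcal{B}|)(1-\eta h)^k$.

```python
import numpy as np

def recollection_matrix(hessians, etas, t_u, batch_size):
    """M for a sample removed at flattened update index t_u: the factors
    (I - eta_t H_t) for t = t_u+1, ..., T-1, multiplied with the most
    recent update left-most, then scaled by eta_{t_u} / |B|."""
    d = hessians[0].shape[0]
    M = np.eye(d)
    for t in range(t_u + 1, len(hessians)):
        M = (np.eye(d) - etas[t] * hessians[t]) @ M  # newest factor on the left
    return (etas[t_u] / batch_size) * M

# Sanity check: a constant scalar Hessian h and step size eta give the
# closed form (eta / |B|) * (1 - eta * h)^k, with k remaining updates.
h, eta, T, t_u, bsz = 2.0, 0.1, 12, 4, 32
M = recollection_matrix([h * np.eye(1)] * T, [eta] * T, t_u, bsz)
print(np.isclose(M[0, 0], (eta / bsz) * (1 - eta * h) ** (T - 1 - t_u)))  # True
```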
\n\n**W1:** *The recollection matrix M looks incorrect.*\n\n>We thank the reviewer for these comments and would like to clarify that the recollection matrix $\\mathbf{M}$ is correct because its product requires multiplying the $\\mathbf{I}-\\eta \\mathbf{H}$ of the most recent update with those of the past updates, such as $(\\mathbf{I} - \\eta _ {E,B} \\mathbf{H} _ {E,B})(\\mathbf{I} - \\eta _ {E,B-1}\\mathbf{H} _ {E,B-1})...(\\mathbf{I} - \\eta _ {E,b(u)+1} \\mathbf{H} _ {E,b(u)+1})$. This means the multiplication should begin with the most recent update and continue backward through the previous gradient updates. Can we politely ask what makes the reviewer think that $\\mathbf{M}$ seems incorrect? We greatly appreciate the reviewer\u2019s concern, and to avoid any misunderstandings, we have revised the manuscript and added the necessary explanation in Appendix C.1. \n\n**W2:** *Limitations of the algorithms are not discussed, and the proposed algorithm may not perform well on large-scale datasets.*\n\n>In Appendix B.1 of our initial submission, we provide a detailed description of the limitations of our algorithm and corresponding solutions, including the reviewer\u2019s concern about large-scale datasets. 
For example, we can consider maintaining vectors for $k$ users instead of $n$ data points ($k \\\\ll n$), which would significantly reduce both computation time and storage, without relying on quadratic time complexity in the data size.\\n\\n**Q1:** *Why NS and IJ can't utilize HVP?*\\n\\n>We sincerely thank the reviewers for their insightful questions and have provided the following clarifications in response.\\n>\\n>- The key reason is that, to handle deletion requests from unknown users, both NS and IJ algorithms require explicit pre-computing of the Hessian and its inverse before an unlearning request arrives, which makes HVP impossible.\\n>\\n> Let us recall the technique details of NS and IJ.\\n>\\n> For NS, the update step is: $ \\\\frac{1}{n-m} { \\\\Big( \\\\frac{1}{n-m}\\\\sum_{i=1}^n \\\\nabla^2 \\\\ell (\\\\hat{\\\\mathbf{w}}; z_i) -\\\\sum_{j \\\\in U} \\\\nabla^2 \\\\ell (\\\\hat{\\\\mathbf{w}};u_j )\\\\Big) }^{-1}\\\\sum_{j \\\\in U} \\\\nabla \\\\ell (\\\\hat{\\\\mathbf{w}}; u_j).$\\n>\\n> For IJ, the update step is: $ \\\\frac{1}{n} { (\\\\frac{1}{n} \\\\sum_{i=1}^n \\\\nabla^2 \\\\ell (\\\\hat{\\\\mathbf{w}}; z _ i) ) }^{-1} \\\\nabla \\\\ell (\\\\hat{\\\\mathbf{w}}; u _ j).$\\n>\\n> However, for the Hessian $\\\\sum_{j \\\\in U} \\\\nabla^2 \\\\ell (\\\\hat{\\\\mathbf{w}}; u _ j)$ in NS and the gradient $\\\\nabla \\\\ell (\\\\hat{\\\\mathbf{w}}, u_j)$ in both NS and IJ, **(i)** the forgetting sample $u _ j$ is unknown before the deletion request arrives, and **(ii)** the model $\\\\hat{\\\\mathbf{w}}$ will be deleted after processing a deletion request, while the subsequent unlearned model is also unknown because the forgetting sample $u_j$ is unpredictable.\\n>\\n> This is also a unique advantage of our method, which possesses additivity in Theorem 1 **(L245)**, enabling it to efficiently handle multiple deletion requests in an online manner. 
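To make the NS update quoted above tangible, here is a hedged least-squares toy check, not the paper's implementation: the loss is quadratic, so a single Newton step from $\hat{\mathbf{w}}$ reproduces exact retraining, and the step visibly requires the Hessian of the retained data (a direct solve stands in for the precomputed inverse).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, j = 50, 5, 7                      # n samples, dimension d, remove sample j
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Learned model: minimizer of (1/n) * sum_i 0.5 * (x_i^T w - y_i)^2.
w_hat = np.linalg.solve(X.T @ X, X.T @ y)

# NS update for removing sample j (m = 1):
#   w_ns = w_hat + (1/(n-1)) * H_{-j}^{-1} * grad_j(w_hat).
H_minus = (X.T @ X - np.outer(X[j], X[j])) / (n - 1)   # Hessian on retained data
grad_j = X[j] * (X[j] @ w_hat - y[j])                  # gradient of removed sample
w_ns = w_hat + np.linalg.solve(H_minus, grad_j) / (n - 1)

# Exact retraining on the remaining n - 1 samples.
X_r, y_r = np.delete(X, j, axis=0), np.delete(y, j)
w_retrain = np.linalg.solve(X_r.T @ X_r, X_r.T @ y_r)
print(np.allclose(w_ns, w_retrain))  # True: quadratic loss makes one step exact
```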
We will make this point clearer in the revised manuscript to better highlight our contributions.\\n>\\n>- In addition, inaccurate techniques (e.g., as mentioned by the reviewer, it could be approximated by using methods like least squares) would also render the theoretical bounds of previous works not as strong as the techniques proposed in NS and IJ. These operations require the previous work to re-derive the upper bound, resulting in the loss of the original $\\\\mathcal{O}(m^2/n^2)$ approximation error advantage.\\n>\\n>Given the above reasons, these approximate techniques are more suitable for non-privacy scenarios involving heuristic methods that do not require certified theoretical guarantees, such as bias removal based on influence function in **R8** .\\n\\n**Q2:** *What stopping rule do you use while calculating those loss changes across different algorithms?*\\n\\n>We trained the model normally until the accuracy no longer increased significantly (e.g., when the model accuracy stabilized or started to decline), at which point we considered the model to have nearly converged, and the training needed to stop.\\n\\n**Q3:** *Whether fine-tuning was used for each of the algorithms in experiment?*\\n\\n>We only used fine-tuning in Figure 3 of Appendix B.4. All other experiments are based on Algorithm 1 without fine-tuning. We briefly introduce fine-tuning in Appendix B.4 because we believe our method can serve as a heuristic algorithm in non-privacy scenarios, where theoretical certified guarantees are not necessary.\"}", "{\"title\": \"Recollection matrix\", \"comment\": \"My understanding is that recursion over Eq. 
(35) yields the following:\\n$$\\n\\\\\\\\begin{array}{l}\\n\\\\quad \\\\mathbf{w}\\\\_{E,B}^{-u} - \\\\mathbf{w}\\\\_{E,B} \\\\newline \\\\approx \\n\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{E,b}\\\\mathbf{H}\\\\_{E,b}) (\\\\mathbf{w}\\\\_{E-1,B}^{-u} - \\\\mathbf{w}\\\\_{E-1,B}) +\\n\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{E,b}\\\\mathbf{H}\\\\_{E,b})\\\\frac{\\\\eta\\\\_{E,b(u)}}{|\\\\mathcal{B}\\\\_{E,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{E,b(u)};u) \\\\newline \\\\approx \\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{E,b}\\\\mathbf{H}\\\\_{E,b}) \\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{E-1,b}\\\\mathbf{H}\\\\_{E-1,b}) (\\\\mathbf{w}\\\\_{E-2,B}^{-u} - \\\\mathbf{w}\\\\_{E-2,B})\\\\newline\\\\quad+\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{E,b}\\\\mathbf{H}\\\\_{E,b})\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{E-1,b}\\\\mathbf{H}\\\\_{E-1,b})\\\\frac{\\\\eta\\\\_{E-1,b(u)}}{|\\\\mathcal{B}\\\\_{E-1,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{E-1,b(u)};u)\\\\newline\\\\quad+\\n\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{E,b}\\\\mathbf{H}\\\\_{E,b})\\\\frac{\\\\eta\\\\_{E,b(u)}}{|\\\\mathcal{B}\\\\_{E,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{E,b(u)};u)\\\\approx\\\\cdots\\\\newline\\\\approx\\\\prod\\\\_{e=E}^{1}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{e,b}\\\\mathbf{H}\\\\_{e,b}) (\\\\mathbf{w}\\\\_{0,B}^{-u} - \\\\mathbf{w}\\\\_{0,B})\\\\newline\\\\quad+\\\\prod\\\\_{e=E}^{2}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{e,b}\\\\mathbf{H}\\\\_{e,b})\\\\cdot\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{1,b}\\\\mathbf{H}\\\\_{1,b})\\\\frac{\\\\eta\\\\_{1,b(u)}}{|\\\\mathcal{B}\\\\_{1,b(u)}|} \\\\nabla 
l(\\\\mathbf{w}\\\\_{1,b(u)};u)\\\\newline\\\\quad+\\\\prod\\\\_{e=E}^{3}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{e,b}\\\\mathbf{H}\\\\_{e,b})\\\\cdot\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{2,b}\\\\mathbf{H}\\\\_{2,b})\\\\frac{\\\\eta\\\\_{2,b(u)}}{|\\\\mathcal{B}\\\\_{2,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{2,b(u)};u)\\\\newline\\\\quad+\\\\cdots\\\\newline\\\\quad+\\\\prod\\\\_{e=E}^{E}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{e,b}\\\\mathbf{H}\\\\_{e,b})\\\\cdot\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{E-1,b}\\\\mathbf{H}\\\\_{E-1,b})\\\\frac{\\\\eta\\\\_{E-1,b(u)}}{|\\\\mathcal{B}\\\\_{E-1,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{E-1,b(u)};u)\\\\newline\\\\qquad\\\\qquad\\\\qquad\\\\qquad\\\\qquad\\\\qquad\\\\quad\\\\\\\\;+\\n\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{E,b}\\\\mathbf{H}\\\\_{E,b})\\\\frac{\\\\eta\\\\_{E,b(u)}}{|\\\\mathcal{B}\\\\_{E,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{E,b(u)};u)\\\\newline\\\\approx\\\\prod\\\\_{e=E}^{0}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{e,b}\\\\mathbf{H}\\\\_{e,b}) (\\\\mathbf{w}\\\\_{0,0}^{-u} - \\\\mathbf{w}\\\\_{0,0})\\\\newline\\\\quad+\\\\prod\\\\_{e=E}^{1}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{e,b}\\\\mathbf{H}\\\\_{e,b})\\\\cdot\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{0,b}\\\\mathbf{H}\\\\_{0,b})\\\\frac{\\\\eta\\\\_{0,b(u)}}{|\\\\mathcal{B}\\\\_{0,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{0,b(u)};u)\\\\newline\\\\quad+\\\\prod\\\\_{e=E}^{2}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{e,b}\\\\mathbf{H}\\\\_{e,b})\\\\cdot\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{1,b}\\\\mathbf{H}\\\\_{1,b})\\\\frac{\\\\eta\\\\_{1,b(u)}}{|\\\\mathcal{B}\\\\_{1,b(u)}|} \\\\nabla 
l(\\\\mathbf{w}\\\\_{1,b(u)};u)\\\\newline\\\\quad+\\\\prod\\\\_{e=E}^{3}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{e,b}\\\\mathbf{H}\\\\_{e,b})\\\\cdot\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{2,b}\\\\mathbf{H}\\\\_{2,b})\\\\frac{\\\\eta\\\\_{2,b(u)}}{|\\\\mathcal{B}\\\\_{2,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{2,b(u)};u)\\\\newline\\\\quad+\\\\cdots\\\\newline\\\\quad+\\\\prod\\\\_{e=E}^{E}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{e,b}\\\\mathbf{H}\\\\_{e,b})\\\\cdot\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{E-1,b}\\\\mathbf{H}\\\\_{E-1,b})\\\\frac{\\\\eta\\\\_{E-1,b(u)}}{|\\\\mathcal{B}\\\\_{E-1,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{E-1,b(u)};u)\\\\newline\\\\qquad\\\\qquad\\\\qquad\\\\qquad\\\\qquad\\\\qquad\\\\quad\\\\\\\\;+\\n\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{E,b}\\\\mathbf{H}\\\\_{E,b})\\\\frac{\\\\eta\\\\_{E,b(u)}}{|\\\\mathcal{B}\\\\_{E,b(u)}|} \\\\nabla l(\\\\mathbf{w}\\\\_{E,b(u)};u).\\n\\\\\\\\end{array}\\n$$\\nThus, we have that\\n$$\\n\\\\mathbf{w}\\\\_{E,B}^{-u} - \\\\mathbf{w}\\\\_{E,B} \\\\approx \\\\sum\\\\_{e=1}^{E+1}\\\\mathbf{M}\\\\_{e,b(u)}\\\\nabla l(\\\\mathbf{w}\\\\_{e-1,b(u)};u),\\n$$\\nwhere\\n$$\\n\\\\mathbf{M}\\\\_{e,b(u)}=\\\\frac{\\\\eta\\\\_{e-1,b(u)}}{|\\\\mathcal{B}\\\\_{e-1,b(u)}|}\\\\prod\\\\_{k=E}^{e}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{k,b}\\\\mathbf{H}\\\\_{k,b})\\\\cdot\\\\prod\\\\_{b=B-1}^{b(u)+1}(\\\\mathbf{I}-\\\\eta\\\\_{e-1,b}\\\\mathbf{H}\\\\_{e-1,b})\\n$$ with \\n$\\\\prod\\\\_{k=E}^{e}\\\\prod\\\\_{b=B-1}^{0}(\\\\mathbf{I}-\\\\eta\\\\_{k,b}\\\\mathbf{H}\\\\_{k,b})=:\\\\mathbf{I}\\n$ for $e=E+1$.\\nThis is different from Eq.(36), if my derivation is correct.\"}", "{\"metareview\": \"The paper studies the problem of certified unlearning in machine learning scenarios. It proposes a \\\"Hessian-free\\\" approach, removing the need for explicit Hessian computation and inversion. 
As a result, the paper also removes the requirement that the objective function be strongly convex, and for the unlearning part can even handle nonconvex loss functions. For the learning portion of the results, the paper builds on prior work, which requires convexity. Overall, the paper presents interesting contributions in an important and growing area of machine learning.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal phase was quite productive, where the authors and reviewers engaged in a discussion. The authors addressed all the questions/concerns, revised the paper, and corrected some small issues identified in a subset of the reviews. As a result, all reviews converged towards acceptance.\"}" ] }
C33p2CNOQ8
Training the Untrainable: Introducing Inductive Bias via Representational Alignment
[ "Vighnesh Subramaniam", "David Mayo", "Colin Conwell", "Tomaso A Poggio", "Boris Katz", "Brian Cheung", "Andrei Barbu" ]
We demonstrate that architectures which traditionally are considered to be ill-suited for a task can be trained using inductive biases from another architecture. Networks are considered untrainable when they overfit, underfit, or converge to poor results even when tuning their hyperparameters. For example, plain fully connected networks overfit on object recognition while deep convolutional networks without residual connections underfit. The traditional answer is to change the architecture to impose some inductive bias, although what that bias is, is unknown. We introduce guidance, where a guide network guides a target network using a neural distance function. The target is optimized to perform well and to match its internal representations, layer-by-layer, to those of the guide; the guide is unchanged. If the guide is trained, this transfers over part of the architectural prior and knowledge of the guide to the target. If the guide is untrained, this transfers over only part of the architectural prior of the guide. In this manner, we can investigate what kinds of priors different architectures place on a fully connected network. We demonstrate that this method overcomes the immediate overfitting of fully connected networks on vision tasks, makes plain CNNs competitive to ResNets, closes much of the gap between plain vanilla RNNs and Transformers, and can even help Transformers learn tasks which RNNs can perform more easily. We also discover evidence that better initializations of fully connected networks likely exist to avoid overfitting. Our method provides a mathematical tool to investigate priors and architectures, and in the long term, may demystify the dark art of architecture creation, even perhaps turning architectures into a continuous optimizable parameter of the network.
[ "Representational alignment", "neural network optimization" ]
Reject
https://openreview.net/pdf?id=C33p2CNOQ8
https://openreview.net/forum?id=C33p2CNOQ8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ypu5u9gem9", "xoIv3ARnX6", "um4G8Om0hZ", "sVKED93s9d", "oLoqBhqCIg", "nZH4Ga9hgC", "mF05zkg8gp", "lXp5QiWmSM", "cJRRmBRymH", "YUE82KoosS", "XlsKxPG1VE", "XfT0EXj3uR", "W2n6qB02rc", "TftlrBbm0k", "TXRVsLz4W6", "TBVvZeSoYk", "T6Cdo7KHhh", "So03SxHEzf", "RKVPRqwco3", "O10lqfftEC", "JKdTG5DNZW", "EdCIluAK3f", "CyCGc7D24y", "CpD7WJbfhg", "ByFRoxrNF4", "4D6gxFeO0h", "3UmWGJv7LD" ], "note_type": [ "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1730636155422, 1732280379734, 1734462659842, 1732573673101, 1732534239805, 1729644780411, 1732280488654, 1730660671135, 1732281336189, 1732280668092, 1730823154836, 1732983313648, 1732489022001, 1732983361840, 1732645719108, 1733071815670, 1732464921364, 1732281129417, 1732280880084, 1732464982605, 1732281179456, 1732464797532, 1732281286369, 1732280986117, 1732464847688, 1732280745083, 1737523477258 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1961/Reviewer_Kh2H" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Area_Chair_LAHE" ], [ "ICLR.cc/2025/Conference/Submission1961/Reviewer_seP7" ], [ "ICLR.cc/2025/Conference/Submission1961/Reviewer_Kh2H" ], [ "ICLR.cc/2025/Conference/Submission1961/Reviewer_seP7" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Reviewer_vZFS" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1961/Reviewer_rthw" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Reviewer_vZFS" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Reviewer_rthw" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Submission1961/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This work presents a simple method for training neural networks that are typically considered untrainable. The approach, similar to the teacher-student methodology, involves training a \\\"target\\\" network with the assistance of a \\\"guide\\\" network, which can be either trained or untrained and may have a completely different architecture from the target. The objective is to align the target network's internal representations as closely as possible with the guide ones in terms of CKA similarity.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper studies an interesting problem trying to solve the problem of untrainable networks\", \"The authors analyzed multiple scenarios with different tasks and architectures\", \"The authors provide the supplementary materials with demo notebooks\"], \"weaknesses\": \"**Writing and Presentation**: The paper could benefit from improved clarity and readability. 
Some suggestions to consider:\\n\\n* The paper includes expressions that may introduce ambiguity or complexity (e.g. lines 30,42) potentially making it harder for readers to grasp the authors' points fully. Rephrasing these expressions with clearer language would better convey the challenges discussed and enhance accessibility.\\n\\n* The tables are somewhat difficult to interpret, and it\\u2019s unclear if certain entries serve as baselines. For example, Table 2 could be restructured to distinguish between guide and target, rather than using the \\\"experiment\\\" column, and highlight when training is performed without teacher guidance. Making tables and feature descriptions more self-contained and better explained would enhance clarity.\\n\\n* The Method section could be more clearly organized and written. It is challenging to follow, and restructuring or rephrasing certain parts would likely improve reader comprehension.\\n\\n* In the Related Work section, the \\\"representation similarity\\\" literature is missing (e.g. 2,4-8). Adding this section could support several claims, such as the one on line 145. \\n\\n**Novelty**: The paper\\u2019s novelty is limited, as it overlooks relevant related work that addresses similar challenges. For example:\\n\\n* A similar work exists, such as [1], where authors leveraged relative representations [2] in a teacher-student framework, training the student network to mirror the teacher network\\u2019s latent representations.\\n\\n* The authors\\u2019 claim in the conclusion regarding the absence of known methods for improved network initialization is not entirely accurate. 
For instance, [3] proposed a method for initializing smaller models by selecting subsets of weights from a larger, pretrained model, thereby transferring knowledge from the larger model to smaller architectures.\\n\\n* Additionally, the experiments lack a comparison with the classic teacher-student setting, which would provide a useful benchmark.\\n\\n**Contribution**: While the results show some improvement, they also highlight ongoing challenges in making these networks fully trainable. For example:\\n\\n* In Table 2, the results show an accuracy improvement from 7.5 to 13.10. However, no standard deviation is provided, which limits the interpretation of these results. Moreover, an accuracy of 13.10 is not competitive on this dataset for the image classification task.\\n\\n* The choice of the networks raises questions, as they do not represent state-of-the-art (SOTA) architectures. It would be interesting to explore whether using a pre-trained network to guide another competitive network could yield further improvements. \\n\\n--- \\n\\n[1] Ramos, Patrick, Raphael Alampay, and Patricia Abu. \\\"Knowledge Distillation with Relative Representations for Image Representation Learning.\\\" International Conference on Computer Recognition Systems. Cham: Springer Nature Switzerland, 2023.\\n[2] Moschella, Luca, et al. \\\"Relative representations enable zero-shot latent space communication.\\\" ICLR 2022.\\n[3] Xu, Zhiqiu, et al. \\\"Initializing models with larger ones.\\\" ICLR 2023.\\n[4] Huh, Minyoung, et al. \\\"The platonic representation hypothesis.\\\" ICML 2024.\\n[5] Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. arXiv preprint arXiv:2010.15327, 2020.\\n[6] Shengkun Tang, Yaqing Wang, et al. You need multiple exiting: Dynamic early exiting for accelerating unified vision language model. CVPR 2023.\\n[7] Zorah Lahner and Michael Moeller. 
On the direct alignment of latent spaces. In UniReps 2023.\\n[8] Valentino Maiorca, Luca Moschella, et al. Latent space translation via semantic alignment. NeurIPS 2023.\", \"questions\": [\"Are the trained guide networks pretrained models that are then fine-tuned during the guidance process?\", \"Are the untrained guide networks trained in parallel to the target network, or are they frozen during training? If they are not frozen, what is the advantage of training both networks simultaneously rather than focusing solely on training the guide network?\", \"In Figure 1, when mapping different layers of the guide network to the same layer of the target, it may be important to consider the similarity between these different guide layers.\", \"How might the results differ if an alternative metric is used to calculate the similarity?\", \"What impact would guiding only the final layers of the target network have on the results?\", \"In Table 3, the RNN achieves 100% accuracy and is used as a guide network. What does this imply? Given that the work aims to train untrainable networks using well-performing networks, why use a network that appears to be overfitting as a guide for training the target network?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response (Part 1)\", \"comment\": \"We thank the reviewers for their constructive reviews. We are glad reviewers found the problem and observations in the paper interesting (rthw, vZFS), the potential for guidance to be useful for studying architecture (seP7, vZFS), and appreciated the multiple scenarios and experiments through which guidance was applied (Kh2H, seP7, vZFS). We address some of the common feedback here as well in the individual responses.\\n\\n* Novelty and Positioning (rthw, Kh2H, seP7): We list a number of novel observations that have not been reported in the literature before. 
We will make these clear in our manuscript as well.\\n1. For the first time, **we transfer over the architectural prior of one network architecture to another network architecture**; not the knowledge of one network to another, independent of any data. An untrained (randomly initialized) CNN that has never seen an image, that never learns because it is never updated, in a setting where no images are shown to either network, only randomly (uniformly) sampled matrices of the correct format, can give its architectural prior to another network of another architecture. That other architecture, which was otherwise poorly regularized and doomed to overfit, now does not. \\n2. For the first time, **this shows that fully connected networks can be systematically initialized in a novel way guided by another network architecture to make them avoid overfitting**. No changes to the architecture are needed. We provide this initialization method, although we leave explaining it for future work.\\n3. This casts doubt on one of the most common ideas of what architecture does in ML. It is not the fact that CNNs or Transformers add an architectural prior by way of convolutions, hierarchy, and/or attention, that regularizes them and allows them to work on tasks like object recognition. This story is pervasive in the current ML literature, appearing in some of the most eminent publications like LeCun, Y., Bengio, Y., & Hinton, G. (2015). \\\"Deep learning.\\\" Nature, 521(7553), 436-444. The consequence is that it is assumed that certain architectures are vastly superior to others. For example, Transformers are, with minor exceptions, superior to RNNs. We demonstrate that this relationship isn\\u2019t so clear cut: RNNs can help Transformers and vice versa.\\n4. We can change how a significant subset of ML is done.
Usually we hope to find better initialization methods, regularization methods, or optimizers, without any idea if one exists, what it might look like, what a feasible trajectory looks like, or what the final trained network is. We turn this walk into the darkness into a much more systematic search. We provide networks that have initializations that work which we don\\u2019t understand, that have regularizations that work, that have training trajectories or final variants that perform well, but that we do not understand. We turn a stab in the dark into a systematic reverse engineering effort.\\n5. Guidance is a new form of probing. We can now for the first time systematically ask what is the relationship between two architectures? Is what relates them the statistics of the eigenvalues of their activations? Sparsity? Receptive fields? Etc. Any question can be trivially turned into a new similarity metric, plugged into guidance, and systematically tested. We can now systematically ask, what is the relationship between the prior a CNN imposes compared to that of a Transformer? For example, we can repeat our Deep FCN results with a Transformer rather than a CNN. Then ask how are the two resulting networks different from one another, both in their activation patterns and their behavior? There are countless publications claiming to reproduce Transformer results with CNNs and vice versa. We can help establish the relationship between the two.\\nSome reviewers asked questions related to 5. For example, is the relationship between FCNs with a trained and untrained guide that one has a different effective rank than another? This is an interesting question, and would provide an easy explanation as well as an easy method to regularize FCNs. In appendix J we show that unfortunately things are not so simple. 
Effective rank and intrinsic dimensionality do not explain the results and are not effective.\"}", "{\"metareview\": \"(a) summary\\n\\nThis paper investigates how to use a guide network to train a target network that traditionally overfits (FCN) or underfits (CNN). It performs representation alignment between the guide and target DNN by introducing additional loss terms. The initial results suggest that guidance can improve performance in various settings.\\n\\n(b) strengths\\n+ The observation that the performance of a student model can be improved if its intermediate representations are matched to a teacher model is interesting and novel to some degree.\\n+ It investigates the role of network architecture inductive bias for a better understanding of neural networks.\\n+ It conducts extensive experiments over a wide range of tasks and analyses.\\n\\n(c) weaknesses\\n- The proposed method has limited novelty due to its similarity to the teacher-student setting. There are a few papers that are conceptually very similar to the proposed ideas in this paper but were not cited: https://arxiv.org/abs/1808.01405\\n- It is not easy to verify the correctness of the claims due to some missing baselines (distillation and auxiliary losses tied to intermediate layers).\\n- It lacks concrete insights into architectural inductive biases based on the proposed approach.\\n- The presentation is not polished.\\n\\n(d) decision\\n\\nAlthough the paper has potential for better understanding the inductive bias of DNNs, it is not ready for publication in its current form due to its limited novelty, missing experiments, and unpolished presentation. Please keep the reviewers' comments in mind when preparing a future version of the manuscript.\", \"additional_comments_on_reviewer_discussion\": \"This paper has received diverging reviews ranging from 3 to 8.
The reviewers (rthw, vZFS) agree that the problem and observations in the paper are interesting, and the proposed method has the potential for studying DNN architecture inductive bias (seP7, vZFS). Some common concerns (rthw, Kh2H, seP7) on the limited novelty, missing experiments, and the unpolished presentation remain after the rebuttal. Another round of paper revision and review is suggested to make the paper stronger.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the additional comparisons to other similarity metrics and for the clarifying responses. My concerns regarding the ubiquity of transformers today potentially limiting the impact of this investigation and marginal gains on ImageNet remain. While I do agree with reviewer rthw that the work would benefit from more concrete insights based on the proposed approach, I believe overall the approach proposed is of scientific interest for studying architectural inductive biases. Based on this view and taking into account the revised draft and authors' comments, I maintain my recommendation to accept the paper leaving my score at an 8.\"}", "{\"comment\": \"I sincerely appreciate the author\\u2019s response and the additional clarifications provided. While I understand that the aim of this work is to address training challenges in networks with known limitations, I still find it challenging to grasp the broader practical implications of the work. Specifically, the overall performance of these networks remains a concern. I value the clarifications shared, and I have adjusted my score to reflect this.\"}", "{\"summary\": \"The authors propose an approach for aligning representations across two different deep learning architectures. One architecture serves as the \\\"teacher\\\" (providing guidance) to the \\\"student\\\" (target network).
For example, one architecture could be a convolution whereas the other could be a transformer.\\n\\nSpecifically, the authors propose training the target network with an additional loss term penalizing the dissimilarity between the target and fixed guidance network. The authors measure the layer-wise dissimilarity using the complement of linear CKA similarity. \\n\\nThe authors study four tasks (across vision, language, and arithmetic) to illustrate how their approach can improve performance by guiding an ill-suited architecture for the task with a more suitable architecture for the task using their alignment approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed idea is intuitively presented and I believe could serve as a useful tool for studying network initialization schemes as well as scientifically probing the role of architectural inductive biases.\", \"I appreciate the authors investigating how to bridge our understanding of the role of architecture by providing a principled mechanism for transferring some of the architectural biases of one network architecture to another.\", \"Empirically the authors selected a diverse set of tasks with reasonable baselines and choice of target versus guide networks. I appreciate how thorough the authors are in disclosing training details, hyperparameters, and the intent of the experiments. I also appreciate the error analysis conducted across architectures and the error bounds for experimental results.\", \"Authors draw a distinction between typical distillation approaches and the proposed guidance approach that I find clear and well-motivated.\"], \"weaknesses\": [\"The field has mostly converged on the transformer architecture for most tasks.
While this doesn't make the idea presented less interesting scientifically, I'd encourage the authors to keep this in mind when motivating the approach, as today practitioners are not frequently choosing across many choices of architectures. I find initialization and the scientific value of understanding the role of architectural bias still valuable and would encourage the authors to emphasize these angles instead.\", \"The gains for ImageNet are still quite small overall, suggesting we perhaps can't simply transfer over inductive biases or that much, much more tuning is needed. Do the authors have any comments on this point? At a minimum, I'd recommend the authors acknowledge and discuss this in the results section.\", \"Inconsistent gains across trainable versus untrainable without clear intuition as to why this might be happening. I appreciate the authors' attempt at clarifying this in the paragraph starting on line 373, but I still feel this section is missing crucial intuition to help readers make sense of the empirical findings across the tables. For example, why would an untrained ResNet-18 provide better guidance to a Deep FCN than a trained ResNet-18, as shown in Table 2? This is puzzling to me and I am curious if the authors have additional experiments or intuition to better explain this in the context of the rest of the results.\", \"The authors make several claims about the revolutionary potential of the method that are not well-supported with evidence. While I'm also excited about the proposed method and potential future work that builds on it, I'd suggest the authors tone down what right now appear as speculations in this draft. For example, lines 85-86 claim the proposed approach \\\"expands the space of viable networks,\\\" a claim I believe is not well-supported by this work.
Similar claims are made about potential future promises about guidance on lines 108-112 that are not well-supported by the experiments in the work.\", \"Section 4 could be improved. You emphasize a distinction between untrained architectures (versus untrained tasks)\\u2014defining Untrainable Architectures as target networks difficult to train irrespective of task. In the experiments that follow however, the focus is very much on whether a given target architecture is ill-suited for a specific task.\"], \"questions\": [\"It's not crucial to the reception of the paper, but I'm curious if the authors explored the evolution of the CKA similarity across layers comparing early versus later layers throughout guidance akin to [https://proceedings.neurips.cc/paper_files/paper/2014/file/375c71349b295fbe2dcdca9206f20a06-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2014/file/375c71349b295fbe2dcdca9206f20a06-Paper.pdf)\", \"I'm also curious whether the authors explored any other similarity metrics besides linear CKA.\", \"Perhaps I missed it, but where are the experiments with guidance between architectures with a different number of layers described on lines 230-235?\", \"Did the authors consider weighing the two loss terms proposed in equation 1?\", \"Not necessarily a weakness, but I'm curious whether the authors explored transferring equivariant architectures for tasks with known symmetries.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response (Part 2)\", \"comment\": [\"Relationship to Distillation (rthw, Kh2H): Note that guidance is not distillation. In distillation you ideally want to completely reproduce the network that is being distilled into another network. That would defeat the experiments we report where an untrained guide which is never updated at any point helps another network learn. 
What can an untrained network with near-zero performance teach another network? With guidance, this question is not just meaningful, but essential.\", \"Distillation ideally produces an exact copy of a network into another, often a smaller one, generally of the same architecture. Guidance ideally copies over a limited aspect of one network to another, aligning either the general statistics of the architectural prior of a network and/or the knowledge a network has. It does not directly ask that the representations should be the same. We can control what the relationship between the networks should be with the similarity metric.\", \"To this end, two reviewers asked for distillation comparisons. We have included the results of this experiment in Appendix I. Distillation does not help when using randomly initialized networks and underperforms guidance when using a trained network.\", \"Untrained guide network intuition (rthw, seP7): A few reviewers asked for intuition on why untrained guide networks can possibly improve results. How can a network that knows nothing and never learns anything improve another network? This underscores why guidance is not distillation. By the logic of distillation this is nonsense. From the point of view of guidance, we are transferring over the architectural prior of one network to another. For example, a CNN is organized hierarchically and has receptive fields. An FCN does not. Even an untrained CNN has receptive fields which can be observed in its activations, even in response to randomized inputs. This can be transferred to the FCN in principle. Guidance allows you to pick your similarity metric to probe these intuitions systematically. For example, we could measure receptive fields and how hierarchical the activity of a network is, and provide that as the guidance function. If the CNN still regularizes the FCN, this demonstrates that hierarchy and receptive fields are sufficient to explain why FCNs fail. 
If it does not, this demonstrates that hierarchy is not the key component that regularizes FCNs, it is some other attribute of CNNs. Our goal here was to show that it is possible to carry out such experiments and introduce the tooling to do so. In future work, we and others can systematically explore these kinds of explanations.\", \"Initialization (rthw, Kh2H): We update the manuscript to improve the initialization results for FCNs. Now, an untrained CNN, which is never updated, guides an FCN on randomized images (uniform noise; we previously used real images). Then, the two are disconnected after 150 update steps. The resulting FCN is now regularized and does not overfit when trained on ImageNet, see Appendix K. Nothing about image statistics, or the knowledge of a CNN, is regularizing the FCN, because neither exists, it is the mere architectural prior of the CNN that regularizes the FCN. Future work should be able to reverse engineer this method.\", \"Other representation metrics (seP7, Kh2H): A few reviewers asked for additional metrics with guidance. We show two additional metrics, Representation Similarity Analysis (RSA) and ridge regression in Appendix Section L. This also speaks to the geometry of the problem, as each makes different assumptions about the types of degrees of freedom when matching representations. Each works, but to differing degrees. Systematically understanding those differences is future work.\", \"Paper Changes: We summarize paper changes here. 
All changes have been made in blue\", \"Section 2: We have shortened the related work and added citations as requested by reviewers\", \"Section 4: We have added some examples for untrainable architectures and untrainable tasks.\", \"Section 6: we have added some sentences to the conclusion to emphasize the success of guidance to study inductive biases.\", \"Appendix: We have added Appendix Sections F.1, G.1, I, J, K, L, M, and N to address prior comments.\"]}", "{\"summary\": \"This paper proposes a novel distillation technique called guiding which distils the inductive priors of the 'guide' network into the target network. The paper shows that by using a network known to be strong at the task as the guide, a target network which is known to be weak at the specific task can be significantly improved in terms of performance. The paper then proposes that the guiding network can be disconnected from the process at a very early stage so that it simply acts as an initialisation technique for the network.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Paper is very well written and even enjoyable to read.\", \"Technique is novel and proposes a new direction in which the field of distillation can go.\", \"Technique could revitalise 'dead' neural architectures which failed to take off due to weak performance.\", \"Experimentation on multiple modalities.\"], \"weaknesses\": [\"I would have liked to see results on smaller datasets. I feel that this could have resulted in a higher number of ablations to determine different configurations of the network e.g. should we be connecting multiple layers to a single layer in the target? Should the guide network have the same number of layers as the target? 
Do all layers in the target need connections to the guide?\", \"I would have liked to see more in depth discussion of the ViT to CNN/MLP experiments which are hinted at in figure 3.\"], \"questions\": \"I am interested in discussion on both of the weaknesses that I have proposed. Overall I am positive on this paper, but I feel that if the above weaknesses were addressed, it would improve the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewer seP7 Response (Part 2)\", \"comment\": \"> Q1: but I'm curious if the authors explored the evolution of the CKA similarity across layers comparing early versus later layers throughout guidance akin to\\n\\nThis is a great question and something we have included in an appendix subsection F.1 of the paper. We find that CKA optimizes more quickly in early layers rather than later layers. \\n\\n> Q2: I'm also curious whether the authors explored any other similarity metrics besides linear CKA.\\n\\nYes we have! We include RSA [1] and ridge regression results in appendix section L. We find that RSA has similar performance to CKA, when using randomly initialized guide networks but improves dramatically when using trained guide networks. More excitingly, ridge regression leads to further improvements. This finding is intuitive; RSA and ridge regression have more degrees of freedom than CKA, due to fewer invariances. This means that fitting can be improved with less strict similarity functions. See [2] for an overview of other similarity metrics. There are many and we can incorporate any of them as long as they are differentiable! 
\\n\\n\\n| Metric \\t| ImageNet Validation Accuracy |\\n|----------------------------|------------------------------|\\n| Trained ResNet-18; CKA \\t| 7.50 \\t|\\n| Trained ResNet-18; RSA \\t| 11.02 \\t|\\n| Trained ResNet-18; Ridge | 9.46 \\t|\\n| Untrained ResNet-18; CKA | 13.50 \\t|\\n| Untrained ResNet-18; RSA | 11.74 \\t|\\n| Untrained ResNet-18; Ridge | 15.69 \\t|\\n\\n> Q3: Perhaps I missed it, but where are the experiments with guidance between architectures with a different number of layers described on lines 230-235?\\n\\nMost architectures we tested had a different number of layers. For instance, the example in Figure 1 has a different number of layers between the two networks. The only setting that had the same number of layers between the guide network and target network was the Deep ConvNet guided by ResNet-50. The Deep ConvNet was the equivalent of a ResNet-50 with residual connections removed.\\n\\n> Q4: Did the authors consider weighing the two loss terms proposed in equation 1?\\n\\nWe did but eventually came to the conclusion that an equal weighting of both loss terms was sufficient. There were some design choices that could be made. For example, we could have introduced a tunable parameter to the representational dissimilarity term. In practice, we found that some networks reduce the parameter to 0 to eliminate the contribution of the similarity. When setting the parameter manually, we found that a constant factor of 1 worked well for initial experiments with image classification and sequence modeling results. Larger values seem to make the task loss saturate or stall. Smaller values reduced the improvement from providing representational alignment. In practice, we could add this as a hyperparameter that can be tuned. 
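To make the comparison in the table above concrete, here is a minimal NumPy sketch of the two similarity functions discussed, linear CKA and RSA. This is an illustration written for this response rather than the code used in the paper; during guided training the dissimilarity term (one minus the similarity) would be computed with differentiable tensor operations and summed over the mapped layer pairs alongside the task loss, with the equal weighting described above.

```python
import numpy as np

def linear_cka(X, Y):
    # X: (n, d1), Y: (n, d2) activations for the same batch of n inputs.
    # Invariant to orthogonal transforms and isotropic scaling of either side.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

def rsa_similarity(X, Y):
    # RSA: correlate the upper triangles of the two representational
    # dissimilarity matrices (pairwise Euclidean distances). It has fewer
    # invariances than CKA, i.e. more degrees of freedom when matching.
    def rdm(R):
        sq = (R ** 2).sum(axis=1)
        return np.sqrt(np.clip(sq[:, None] + sq[None, :] - 2 * R @ R.T, 0.0, None))
    iu = np.triu_indices(len(X), k=1)
    return np.corrcoef(rdm(X)[iu], rdm(Y)[iu])[0, 1]
```

Ridge regression, the third metric in the table, relaxes things further by fitting an explicit linear map from one representation to the other before comparing.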
\\n\\n> Q5: Not necessarily a weakness, but I'm curious whether the authors explored transferring equivariant architectures for tasks with known symmetries.\\n\\nWe haven\\u2019t, but this is one of many architectures that are on our list for future work. It would be amazing to use equivariant architectures to directly check for the inductive biases. Thank you for this suggestion.\\n\\n[1] Kriegeskorte et al. Representational similarity analysis-connecting the branches of systems neuroscience. Frontiers in systems neuroscience (2008)\\n\\n[2] Klabunde et al. Similarity of neural network models: A survey of functional and representational measures. arXiv 2023.\"}", "{\"comment\": \"We thank the reviewer for their constructive review. We are glad the reviewer found the observations in the paper interesting and novel and generally found the paper well-written. We address the key points here by breaking down some of the weaknesses.\\n\\n> W1: interesting observation but somewhat a natural outcome of what we already know\\u2026 Suggesting that what skip connections in one way do is to allow many parallel pathways inside the model for letting the gradients flow throughout. I don't see how what is proposed here is fundamentally different from what was shown in the prior literature.\\n\\nWe agree that some of the findings in our paper align well with findings in prior work. We believe that strengthens our work! We now have additional experimentation to verify prior findings. However, we would also disagree that all our findings are entirely explained by targeting earlier layers in the network or improving gradient flow. This could be a reasonable explanation of the Deep FCN and Deep ConvNet results, but we would argue this would not work with the Shallow FCN or with the transformer guided for the parity task, which are not deep enough to have gradient flow issues. 
RNN results are also not necessarily attributable to gradient flow: copy-paste and language modeling have limited performance on RNNs due to problems with incorporating memory. Fixing gradient flow may prevent vanishing/exploding gradients but won\\u2019t make an RNN better at incorporating memory. Guidance is more general than the approaches that the reviewer references.\\n \\nMore importantly, guidance is useful for testing a wide variety of hypotheses about how we overcome barriers for trainability. As we reference in the paper, a unified theory of skip connections is still unclear [1, 2] despite the work you reference. There could be other components of the convolutional architecture that are preventing overfitting in a Deep FCN. And, we find the distinction between the success of a trained guide and randomly initialized guide striking for the Deep ConvNet result.\\n\\n> W2: distillation is a critical missing baseline in all the results.\\n\\nWe include a distillation baseline in this case using the technique in [3]. We include the accuracies here and include a loss curve in Appendix Section I Figure 8. The distillation baseline with a trained network is worse than guidance, getting an accuracy of 3.45%. And, the baseline with a randomly initialized ResNet-18 hurts performance, with an accuracy of 1.41%. Guidance significantly improves over distillation and works with randomly initialized networks as guide networks. \\n\\n> W3: in the context of this paper which uses network representations as guide, the informative measure would be the accuracy of a linear classifier trained on the penultimate layer features.\\n\\nThis is a fair point. It\\u2019s very likely that there are features that are linearly decodable in layers closer to the penultimate layer. We train a linear decoder for the randomly initialized ResNets to test their object recognition performance. The linear decoder is trained on 4000 ImageNet images and tested on 1000 ImageNet images. 
In general, this linear decoder has above-chance performance on all layers of the model, although the accuracy is generally below 1%. We show these results in Appendix Section N. More broadly, a question remains about what features are present in the randomly initialized ResNet that are improving results and whether the features that are useful for linear decodability are accessible by CKA to improve the target network. These are interesting questions because they point to the potential for universal priors that are architecturally agnostic. We hope this highlights the potential for guidance to answer these questions and opens up many lines of future work.\", \"title\": \"Reviewer rthw Response (Part 1)\"}", "{\"summary\": \"This paper proposes guidance, a method where a well-performing guide network directs the layer-wise representations of a target network, transferring inductive biases without modifying the architecture. The technique aims to improve training for architectures traditionally prone to issues like overfitting or underfitting, such as fully connected networks and plain CNNs. Initial results suggest that guidance can improve performance in various settings, but further validation across tasks and more rigorous testing would clarify its robustness and broader applicability.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The observations are interesting and to some degree novel.\", \"A wide range of tasks from several domains are tested empirically.\", \"The paper is well written and methods are explained comprehensively.\"], \"weaknesses\": \"1. The main observation of the paper is that the performance of suboptimal architectures can be improved if their intermediate representations are matched to those of a more optimal guide model. 
This is an interesting observation but somewhat a natural outcome of what we already know.\\n\\t\\n\\tFunctionally, representation matching to a guide network allows deeper layers of the target networks to receive useful gradients for learning that they normally do not receive from the task loss. We know that in deep networks w/o skip connections, auxiliary loss functions applied on intermediate layers help with learning [1]. These losses were proposed to mediate the very same issue examined here, that deep CNNs suffer from the vanishing gradient. Relatedly, models with skip connections like ResNets can also be viewed as an ensemble of shallow networks, as skip connections allow gradients to pass through the deep layers [2]. This suggests that one thing skip connections do is allow many parallel pathways inside the model, letting the gradients flow throughout. I don't see how what is proposed here is fundamentally different from what was shown in the prior literature. One way to make this claim more grounded is to show that representation matching outdoes auxiliary losses applied on earlier layers. In general, distillation is a critical missing baseline in all the results. \\n\\t\\n\\t[1] Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. \\\"Going deeper with convolutions.\\\" In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9. 2015.\\n\\t\\n\\t[2] Veit, Andreas, Michael J. Wilber, and Serge Belongie. \\\"Residual networks behave like ensembles of relatively shallow networks.\\\" Advances in neural information processing systems 29 (2016).\\n\\n\\n2. A second primary observation in the paper is that even when the representation is matched to that of an untrained guide network it leads to substantial improvements in the performance of the target network. 
While this is an interesting observation, especially in cases where this scenario surpasses the matching of the trained network, the portrayal of the results is inaccurate. The main issue lies in considering the untrained model as incapable of performing the task better than chance. In Table 2, the accuracies of the untrained models are reported as being consistently around chance, which is expected if none of the parameters are trained. However, in the context of this paper, which uses network representations as a guide, the informative measure would be the accuracy of a linear classifier trained on the penultimate layer features. This value is typically much higher than chance and I expect it to be so here as well, which attests to the usefulness of the features at their initial state. \\nApart from this, there are more things to be done in order to at least attempt to offer a plausible explanation of this phenomenon. For example, what are the geometrical differences as a result of matching to the untrained and trained networks? Presumably, since the trained model is better able to distinguish between ImageNet classes, matching its representations, if successful, should have been more useful for the target network. But somehow that's not true. \\n\\n3. The observed higher utility of the untrained guide network becomes more interesting especially when considering the fact that the matching loss term is included throughout the training. One possible explanation could be that the untrained network geometry is simpler to replicate for the target network while the fully trained model could have a much more clustered representation that is too difficult for the target to learn from. This could be tested by using various guide networks at different stages of training and tracking performance when matching to each, then examining any relationship between geometrical properties of the latent space and performance of the target network.\\n\\n4. 
Figure 4 results are interesting in showing that even limited training on the matching loss is still helpful. Would the target network reach its full accuracy in Table 1 if training were done in two phases, where the first phase fully trains the network on the untrained guide representations (weight initialization), and phase 2 does the task training? I think this experiment is important for supporting the claim about finding a good initial weight in these models using the representation matching idea. \\n\\n5. In parts, the writing is very verbose. 6.5 pages of the paper are dedicated to the intro, background, and a methods section that contains only modest novelty. Sections 1 and 2 should be written much more concisely to make room for additional experiments. Some of the details of the experiments could also be moved to the appendix. \\n\\n6. There are a few papers that are conceptually very similar to the proposed ideas in this paper but were not cited: https://arxiv.org/abs/1808.01405\\n\\n7. The last paragraph of the introduction talks about the limitations; it would fit better at the end of the paper (e.g. in the discussion)\\n\\n8. Line 116: \\\"To that end, we also did not optimize networks to convergence,\\\" this sounds like a serious limitation that could be avoided. If the networks are not trained optimally some of the conclusions may turn out to be incorrect. E.g. the observed differences between matching the trained and untrained models. \\n\\n9. Line 257: \\\"We describe the task settings\\\": remove \\n\\nOverall, this paper showcases very interesting observations but falls short at providing concrete insights. The paper in its current format reads as premature. The methods are not novel, the observations are interesting but no real insight is offered. This study has great potential, but it is not yet realized, and so I don't think it is ready for publication.\", \"questions\": \"1. 
Figure 3 shows the pattern of error consistency, is this any different from what would be obtained from distillation?\\n\\n2. Related to fig. 3, are the intermediate representations of these two guide models also similar? A similar plot could be made for the intermediate representational similarities which would be very informative.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (Part 1)\", \"comment\": \"Thank you for staying engaged as we approach the finish line, and we\\u2019re happy that we could address some of your concerns. Hopefully we can do more below!\\n\\n> I genuinely think the findings are potentially interesting but also sincerely believe that the paper needs a major revision and another round of review to be ready for publication, especially considering the volume of content added during the revision.\\n\\nWe want to note that these were experiments requested by the reviewer; experiments which agree with the paper. Would the reviewer be happy if for their own paper, a reviewer requested experiments and then used the fact that new experiments exist as the reason for rejection?\\n\\nThis makes it impossible to accept the paper under any conditions. Either we refuse to run experiments, in which case the reviewer rightly votes to reject because we didn\\u2019t answer their questions, or we run experiments, in which case the reviewer votes to reject because we did run the experiments and answer their questions. Particularly in light of the fact that almost all of the experiments are either nulls, for example, distillation does not work, just as we claim, or strengthen the paper such as using new metrics for representational similarity like CKA or ridge regression.\\n\\n> Re W1: I agree with the authors that vanishing gradient alone cannot explain all of the presented results, in particular the shallow FCN experiment. 
However, I don't think the rebuttal has so far helped convince me that this is not the case. As I suggested in my initial review, one way to test and rule out this possibility in cases where it could be the cause is to add auxiliary losses similar to those that were used in the Szegedy et al 2015 paper. The degree to which training models with these additional losses would improve could be an indicator of the relative importance of vanishing gradient as a cause.\\n\\nWe want to note that vanishing gradients don\\u2019t explain many other results in the paper, such as the transfer between RNNs and Transformers, or the early disconnect results for FCNs, where only a few updates are needed before the FCN is in a good state, the guide can be disconnected, and the FCN continues to train correctly.\\nWe will add distillation as a baseline everywhere \\u2014 it does not work.\\n\\nICLR doesn\\u2019t allow reporting additional experiments in this last week. We are happy to add this experiment to the final version of the paper. But note that it cannot substantially change the story.\\n\\n> As mentioned in my original review, I think distillation should be considered as a baseline in all the experiments to help the reader better judge the expected boost between guidance and distillation.\\n\\nWe will do this. But note, distillation does not work for our problems. This is a near-null baseline.\\n> The new figure 11 and table 5 that present the current distillation results are not referred to in the main text and currently all of the distillation experiments are only mentioned within the appendix. To be clear again, distillation should be considered as a baseline in all the experiments and be presented along with the results in the main text/figures/tables.\\n\\nFrom the new appendix section I and figure 11, I can't tell if the distillation baseline is trained properly or not. Including more details would be helpful there. 
For example, the learning rate and the number of epochs for which the model is trained for. The x-axis on figure 11-left shows that the model is trained for 6000 steps but x-axis on the right plot shows up to about 100 epochs. The caption of Figure 11 is not helping in clarifying things either. Why the discrepancy? Is this model trained for 6000 iterations only? If yes, that doesn't sound enough to me. Overall, I'm not confident that the current distillation baseline is done properly.\\n\\nWe followed the same procedure for distillation as we did for training our base networks and guidance experiments. We use 100 epochs of training with a learning rate of 1e-4 and a batch size of 256. For any image classification experiment, this follows from He. et. al. 2016. \\n\\nThe x-axis of the left plot refers to training steps. This refers to the number of optimization steps taken during training over the 100 epochs. We take the average training loss every 80 training steps, which changes the x-axis limits. This average is taken to make the plot cleaner and doesn\\u2019t change any result. The x-axis of the right plot refers to the validation loss at each training epoch. So, the total iterations the model is trained for is 6250 * 80 or about 500000 iterations. This matches the same procedure that is used for all other image classification experiments. \\n\\nWe follow the same procedure for tuning the learning rate of the model as we did with our base and guided networks. This entire procedure has been standardized to ensure a fair comparison.\"}", "{\"title\": \"Response\", \"comment\": \"I appreciate the papers response and have raised my score accordingly.\"}", "{\"title\": \"Response (Part 2)\", \"comment\": \"> Re W3: The reported accuracies don't sound quite right to me considering what is already reported elsewhere. For example [1] reported that fitting a linear decoder on randomly initialized RN18 features could reach ~12% accuracy on Imagenet. 
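For readers comparing the two objectives: the distillation baseline above matches only the teacher's output distribution, in the style of Hinton et al. (2015). The sketch below is an illustrative NumPy version of that standard soft-target loss, not necessarily the exact formulation of [3].

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Hard-label cross-entropy plus KL divergence to the teacher's
    # temperature-softened outputs; the T**2 factor keeps the soft-target
    # gradients on a comparable scale across temperatures.
    n = len(labels)
    ce = -np.log(softmax(student_logits)[np.arange(n), labels] + 1e-12).mean()
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=1).mean()
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

With a randomly initialized teacher, the soft targets are essentially noise, which is consistent with the degraded accuracy reported for that baseline.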
The issue is potentially in the way the additional experiment is carried out here, which only considers 4000 images for doing the fitting. In any case, the point to be made from these numbers is that the unit responses in the untrained model are far from being useless for classifying objects. This is a point that I feel is repeatedly implied in the submission and that I don't agree with.\\n\\nFrom [1]: Section 3.8, page 10, \\u201cA linear classifier on the original input images (150K features) achieves a 3.4% test top-1 accuracy\\u201d.\\n\\nNB: To achieve the 12% accuracy the reviewer cites, [1] must sample 31,568 random networks, then use their novel method to combine the features of those 31k randomly sampled networks together, and then train the MLP. As [1] reports, performance for 1 randomly initialized network is 3.4%.\\n\\nIf the reviewer does not agree with us, nor with the citation they provide, we urge them to run this experiment. Both the literature, including [1], and our experiments, agree with our point on the submission: performance of randomly initialized models is very low for object recognition.\\n\\n> Re W5: Sorry for having been vague about this. What I meant was to consider target networks in between the untrained and fully trained networks. E.g. 20%, 40%, 60%, 80% trained networks as guides.\\n\\nThank you for the clarification and the suggested experiment. We were not surprised by the results and will include it in the final manuscript. Since ICLR doesn\\u2019t allow reporting new results in the last week, we cannot paste the results here. \\n\\n> additional comment: I noticed that many (all?) of the new appendix results/sections are not referred to in the main text. They should all be referred to in the main text.\\n\\nApologies, we will do so for the final version. We wanted to get something with all of the results into the hands of the reviewers as quickly as we could.\\n\\nWe hope this addresses the reviewer\\u2019s outstanding concerns. 
Thank you!\\n\\nAnd we hope that addressing the reviewer\\u2019s concerns, both previously and now, has an impact on how the reviewer votes with respect to the paper.\"}", "{\"comment\": \"Thank you for taking the time to read our response! We really appreciate that you\\u2019ve engaged with us.\\nTo address the reviewer's concerns about performance. First, note that our goal was not performance, it was to escape the bad regime that many architectures are stuck in which makes them untrainable, leading them to be completely uncompetitive with state-of-the-art networks like Transformers. That being said, we do demonstrate cases where our performance is objectively very good.\\n1. Our results show that plain vanilla RNNs, with guidance, are competitive with GPT-2 small when they have similar sizes. It is widely believed that the RNN architecture must be changed.\\nGPT-2 was an inflection point in LLMs and Transformers, where many people became convinced that they generalize in a novel way: \\\"Language Models are Unsupervised Multitask Learners\\\", Radford et al. 2019. In many ways this launched the current wave of research and resulted in considerable investment and tens of thousands of citations.\\nShowing that RNNs could always have kept up with this, had we known how to train them demonstrates the real-world performance the reviewer is looking for. In addition, it shows that RNNs can scale, something that the many current ML results say isn't feasible. Our result is not an upper bound, it is a lower bound that demonstrates real-world scaling for RNNs.\\nThis seminal paper from 2019 could have been written about RNNs as they existed in the 1960s and 1970s if only we knew how to train them properly! Guidance is one such way, and now that we have initial networks and the desired final network, future work can reverse engineer how to do this even without guidance. That is a significant and unexpected real-world performance result.\\n2. 
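As a concrete reference for the linear-probe protocol discussed under Re W3: the backbone is frozen and only a linear head is fit on its features. The sketch below is illustrative; the closed-form ridge fit onto one-hot targets stands in for whatever classifier head is actually trained in Appendix N.

```python
import numpy as np

def fit_linear_probe(feats, labels, num_classes, l2=1e-2):
    # Closed-form ridge regression onto one-hot targets; the (frozen,
    # possibly randomly initialized) backbone producing `feats` is
    # never updated.
    n, d = feats.shape
    onehot = np.eye(num_classes)[labels]
    return np.linalg.solve(feats.T @ feats + l2 * np.eye(d), feats.T @ onehot)

def probe_accuracy(W, feats, labels):
    return ((feats @ W).argmax(axis=1) == labels).mean()
```

The disagreement above then reduces to what accuracy such a probe reaches on a single randomly initialized network (3.4% in [1]) versus an ensemble of many such networks (~12%).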
An added result is that RNNs can improve Transformers. A near-10% improvement to the performance of a Transformer represents a major improvement given that it doesn't require a change to the architecture, the optimizer, the number of parameters, nothing at all. In addition, to the best of our knowledge, no one has observed that an RNN can teach a Transformer anything before. Likely, any task for which an RNN performs well can be used to enhance a Transformer. This again represents a real state-of-the-art performance improvement.\\n\\nWe hope that by showing that RNNs' performance is competitive at similar scales to GPT-2 and that RNNs can meaningfully improve Transformers, we have assuaged the reviewer's concerns about performance. While not our goal, our results do achieve significant and improved performance.\"}", "{\"comment\": \"To clarify, my role as a reviewer is to evaluate the submission as is, not to judge its potential or a hypothetical future version. The comments and suggestions I provided were intended to enhance the quality of your work. I\\u2019ve provided this feedback in good faith, having spent far beyond what I typically spend on any individual submission and what I consider as a reasonable ask from any reviewer in terms of time commitment. Despite this, the authors chose to make the matter personal in their last response, painting a false picture that my vote to reject this paper is based on the new experiments added.\", \"here_are_the_main_reasons_for_maintaining_my_vote_to_reject\": [\"some of the experiments carried out during the rebuttal are done in a manner that doesn\\u2019t boost confidence and instead gives me the impression that they were (understandably given the limited time span of the rebuttal) performed in a rush and possibly without being completely vetted. E.g. 
the distillation baseline experiment that was only performed for one case and not others, the plots and captions that were not completely coherent, and were not properly incorporated into the text. Comments such as \\u201cWe will add distillation as a baseline everywhere \\u2014 it does not work.\\u201d do not boost any confidence in trusting the correctness of the claims.\", \"the changes are not appropriately incorporated into the submission despite having had the possibility to do so. I\\u2019ve commented on the specifics in my previous responses.\", \"some of the comments were ignored altogether, e.g. 1) reworking the text to cut back on the verbose introduction, background, and routine methods to make space for experiments and/or more baselines 2) adding a baseline with auxiliary losses.\", \"To be completely clear, I am not comfortable increasing my score to anything above 3. I think this paper should be rejected, so it can be reworked without being rushed, and that the number of missing experiments and details warrants a complete reworking of the paper and resubmission for fresh review.\"]}", "{\"title\": \"Followup for Reviewer vZFS\", \"comment\": \"Thank you for taking the time to review our work. Since the discussion period is coming to a close, we just wanted to reach out to make sure our response and additional experiments adequately addressed your concerns. Please let us know if you have any additional concerns, we would be happy to clarify! If the answer sufficiently addresses your concerns, we would appreciate if you could adjust your score.\\n\\nThank you,\\nAuthors.\"}", "{\"title\": \"Reviewer Kh2H Response (Part 1)\", \"comment\": \"We thank the reviewer for their constructive review. 
We are glad the reviewer found that the paper tries to solve an interesting problem and found the use of multiple scenarios to be positive.\\n\\n> W1: A similar work exists, such as [1], where authors leveraged relative representations [2] in a teacher-student framework,\\n \\nThank you for bringing this to our attention. We will incorporate this into our related work. However, we want to draw important distinctions between this work and our work. The referenced work uses a distillation process that aligns relative representations between the student and teacher networks.\\n \\nFirst, it\\u2019s important to note that the setting is different, as is true of any distillation paper. We are applying guidance across much larger, deeper target networks. Moreover, we would like to emphasize that our setting uses different architectures, which is not covered in the paper.\\n \\nHowever, there are further reasons why guidance may have broader applicability than the cited paper. While the cited paper refers to relative representations, we note that this idea is highly similar to an approach called Representational Similarity Analysis (RSA) [1] in neuroscience. RSA compares sets of representations as follows. Given two sets of representations, $R_1 \\in \\mathbb{R}^{n, d_1}$ and $R_2 \\in \\mathbb{R}^{n, d_2}$, we first find the pairwise distance between every pair of inputs in the set for both sets of representations, i.e., we compute the representational dissimilarity matrix (RDM). This leads us to two matrices, $RDM_1 \\in \\mathbb{R}^{n, n}$ and $RDM_2 \\in \\mathbb{R}^{n, n}$. This is very similar to the idea of relative representations presented in your cited work, where the only difference is that relative representations use anchor points.\\n \\nAdditionally, as requested by other reviewers, we can confirm that guidance improves results with RSA (see Appendix Section L). Given that we apply RSA at several layers, we are providing a stronger signal to the target network. 
This almost guarantees that guidance will lead to a stronger result. Finally, this demonstrates the power of guidance over the cited work. Guidance can be applied across several similarity metrics as long as they are differentiable. This allows us to answer many questions that weren\u2019t possible before. For example, since we apply guidance over multiple activations, we can ask about similarities across layers. We can also modify the similarity function as we see fit to control what information is sent from the guide to the target. Guidance is more general and more powerful.\n\n> W3: However, no standard deviation is provided, which limits the interpretation of these results. Moreover, an accuracy of 13.10 is not competitive on this dataset for the image classification task.\n \nWe are confused about this comment because we have provided error bars on all reported numbers in all tables and on all loss curves. We are perhaps missing what the reviewer means in this case.\n \nWith respect to the comment on low accuracy scores, the goal of the paper was to overcome training difficulties in networks that have known failures. After overcoming these failures, there are many modifications we can make to achieve competitive performance. For example, for the FCN, we could reduce the depth of the network, add a learning rate scheduler, or use a larger guide network. Making the networks competitive and achieving a strong result here is a paper in its own right. This paper introduces a technique, guidance, which has strong implications for alternative architectures and understanding inductive biases in neural networks.\n \n> Q1: Fine-tuned during the guidance process? Untrained guide networks trained in parallel to the target network, or are they frozen during training\n \nThe guide network is frozen, regardless of whether it is trained or randomly initialized. 
We do not update the parameters of the guide network.\\n \\n> Q2: In Figure 1, when mapping different layers of the guide network to the same layer of the target, it may be important to consider the similarity between these different guide layers.\\n \\nOur apologies, we realize that figure 1 may be a bit confusing visually and could be read as mapping several different layers of the guide network to the same layer of the target network. However, in actuality, this network is unrolled (see the x50 and x12 which indicate 50 FCN blocks and 12 ResNet blocks in the image). In this case, the figure is indicating a 1-1 mapping.\"}", "{\"title\": \"Reviewer rthw Response (Part 3)\", \"comment\": \"> W7: To that end, we also did not optimize networks to convergence\\n\\nThis limitation aims to address a concern that guaranteeing convergence in neural networks is generally difficult and that results in this paper are based on very simple optimization strategies that can be improved upon using better optimizers or regularizers. It is also clear that the loss has not saturated in some loss curves we plotted (see the copy-paste result with an untrained transformer guide in Figure 2). We believe this caveat is fair but not concerning for the interpretation of results comparing untrained and trained guide networks.\\n\\nFirst, our paper aims to analyze inductive biases. The performance difference between untrained and trained guides is interesting. But, the most interesting and important result is that architectural priors are sufficient at guiding the target network. When the untrained guide network achieves significant improvements, the implication is significant: architectural priors are meaningful. Additionally, the training time we provide is significant and overlaps with other papers. The ResNet paper [6] trains networks for 100 epochs. Other sequence modeling works use 100 epochs as well. These papers make claims from the standard training time. 
There are certain settings, such as copy-paste with RNNs, where the results are guaranteed to hold because the difference in performance is evident. Overall, we believe this caveat is justified but will not change the story and interpretation of the paper. \n\n> Q1: error consistency, is this any different from what would be obtained from distillation?\n\nWe can rerun this baseline with a basic distillation approach [3] and show results in Appendix Section I.1. We find that error consistency shows similar trends but has a much smaller effect size. Guidance results in error consistency values between guided networks that are comparable with the error consistency between guides. Distillation cuts this value in half, resulting in an error consistency of 0.26 between distilled networks.\n\n> Q2: Related to fig. 3, are the intermediate representations of these two guide models also similar?\n\nGreat question! We measure the similarity of the internal representations for the two networks using CKA. We make a line plot where the x-axis is the layer index and the y-axis is the CKA similarity for a set of 1000 ImageNet images between a layer of ResNet and a layer of ViT-B. We find that the CKA is lower at later layers in the comparison between ResNet-18 and ViT-B. However, the initial layers are quite similar. We include this in Appendix Section G with a longer discussion. \n\n[1] Li and Papyan. Residual Alignment: Uncovering Mechanisms of Residual Networks. NeurIPS 2023.\n\n[2] He, Liu and Tao. Why Residuals Work? Residuals Generalize. arXiv, 2019. \n\n[3] Hinton et al. Distilling the Knowledge in a Neural Network. arXiv, 2015.\n\n[4] Hu et al. Low Rank Simplicity Bias in Neural Networks. TMLR, 2021. \n\n[5] Fan et al. Intrinsic dimension estimation of data via principal component analysis. arXiv, 2010.\n\n[6] He et al. Deep Residual Learning for Image Recognition. 
CVPR 2016.\"}", "{\"title\": \"Followup for Reviewer rthw\", \"comment\": \"Thank you for taking the time to review our work. Since the discussion period is coming to a close, we just wanted to reach out to make sure our response and additional experiments adequately addressed your concerns. Please let us know if you have any additional concerns, we would be happy to clarify! If the answer sufficiently addresses your concerns, we would appreciate if you could adjust your score.\\n\\nThank you,\\nAuthors.\"}", "{\"title\": \"Reviewer Kh2H Response (Part 2)\", \"comment\": \"> Q3: How might the results differ if an alternative metric is used to calculate the similarity?\\n \\nThis is a great question! We include an experiment where we guide with RSA and linear regression. We see larger improvements in performance when optimizing with RSA and linear regression, likely due to added degrees of freedom [3] associated with the similarity metrics. We refer to Appendix Section L. \\n\\n\\n| Metric \\t| ImageNet Validation Accuracy |\\n|----------------------------|------------------------------|\\n| Trained ResNet-18; CKA \\t| 7.50 \\t|\\n| Trained ResNet-18; RSA \\t| 11.02 \\t|\\n| Trained ResNet-18; Ridge | 9.46 \\t|\\n| Untrained ResNet-18; CKA | 13.50 \\t|\\n| Untrained ResNet-18; RSA | 11.74 \\t|\\n| Untrained ResNet-18; Ridge | 15.69 \\t|\\n \\n> Q4: What impact would guiding only the final layers of the target network have on the results?\\n \\nWe include an experiment where we only provide guidance on the final layer of the target network, along with other ablations in Appendix Section M. In general, the impact depends on the experiment. For example, we found that guiding using only the final layer hurt Deep FCN performance. But, we found that it could be useful for RNN performance on copy-paste. In general, this establishes that guidance has more utility than just fixing the credit assignment problem of gradients in very deep networks i.e. 
there is more to guidance than making it easier to propagate gradients back in the network.\n \n> Q5: In Table 3, the RNN achieves 100% accuracy and is used as a guide network. What does this imply? Given that the work aims to train untrainable networks using well-performing networks, why use a network that appears to be overfitting as a guide for training the target network?\n \nApologies, we are confused by the comment. The 100% accuracy is evaluated on a held-out test set. The work we cite also achieves 100% accuracy on the same parity task. Could the reviewer clarify how this result indicates overfitting?\n\n\n\n[1] Kriegeskorte et al. Representational similarity analysis-connecting the branches of systems neuroscience. Frontiers in systems neuroscience (2008)\n\n[2] Hinton et al. Distilling the Knowledge in a Neural Network. arXiv, 2015.\n\n[3] Klabunde et al. Similarity of neural network models: A survey of functional and representational measures. arXiv 2023.\"}", "{\"title\": \"Followup for Reviewer seP7\", \"comment\": \"Thank you for taking the time to review our work. Since the discussion period is coming to a close, we just wanted to reach out to make sure our response and additional experiments adequately addressed your concerns. Please let us know if you have any additional concerns, we would be happy to clarify!\n\nThank you,\nAuthors.\"}", "{\"title\": \"Reviewer seP7 Response (Part 1)\", \"comment\": \"We thank the reviewer for their constructive and thoughtful review. We are glad the reviewer found the paper intuitively presented, found guidance to be a useful tool for understanding neural network initialization and inductive biases, and appreciated the different baselines and settings. We address specific points and questions here.\n\n> W1: field has mostly converged on the transformer architecture for most tasks\n\nThank you! This is a great point. 
We agree that the presentation of our work should emphasize the potential for understanding inductive biases further and should highlight this more in the paper. We had some of these points in the conclusion but didn\u2019t emphasize them further in the paper. In general, we would also like to mention that we haven\u2019t given up on making these architectures viable! There are many ways to make improvements to our optimization which we plan to dedicate future work towards. RNNs are a potentially exciting avenue to explore given their avoidance of quadratic complexity. With additional scale and evaluation, we hope to find that these architectures are competitive and think our results point to some potential breakthrough. However, we realize that this is not necessarily in scope for the paper and not entirely justified from the current results. We will modify the conclusion and will do a more thorough edit through the paper to emphasize the utility of our work for understanding inductive biases and finding new initializations.\n\n> W2: The gains for ImageNet are still quite small overall\n\nThis is a fair point, and we realize that, even with the gains we report, the FCNs are nowhere close to being useful object detectors. The goal of our paper was to overcome a difficult property of training these architectures. And, as mentioned, this has great implications for understanding inductive biases in neural networks as well as finding new initialization schemes. \n\nThere are many potential takeaways our results point to. First, it could be the case that it is not possible to pass along the entire inductive bias or CKA is not the correct function to do this over. Indeed, we have found that a metric with more degrees of freedom like ridge regression is useful for improving the results of guidance. We may find that there are only certain mathematical aspects of our guide network representations that are useful for supervising representations of the target network. 
This could lead to better designed similarity functions that are simpler and only pass relevant features from the guide network to the target network.\n\nOf course, another conclusion is that guidance will likely need additional tuning. Better representational similarity functions are one thing, but we can likely better design our fully connected network to balance between depth and width. It\u2019s likely that architectural design will still matter. Using variational hidden dimension and better optimization techniques like a warmup scheduler in tandem with guidance may lead to better results. In this paper, we intentionally chose networks with extreme failures. \n\nHaving overcome problems with training these architectures, we argue that many of the tricks that made other networks improve can be applied here. We look forward to spending time making these networks useful. In the meantime, we will be sure to rewrite the paper to focus on guidance as a tool to understand inductive bias a bit more.\n\n> W3: Inconsistent gains across trainable versus untrainable without clear intuition as to why this might be happening.\n\nWe hope to provide some intuitions here. We believe guidance makes a distinction between architectural and knowledge-based priors. Architectural priors could refer to locality or translational equivariance while training priors could refer to internal regularization or sparsity.\n\nThe conclusion of guiding a Deep FCN with ResNet-18 seems to be that architectural priors are more useful for overcoming overfitting and obtaining a better image classifier. One immediate explanation is that the randomly initialized guide network has an easier representation space to align with. Training a network may overwhelm the architectural priors or introduce further changes into the representation space that are difficult to replicate, reducing the transfer of architectural information from the ResNet to the Deep FCN. 
This is emphasized by the use of CKA, which is a weak notion of similarity.\n\nThis also explains why the trained guide network leads to improvements. The architectural prior is still present. But the space of the trained guide network is noisier and contains richer features than what is just present with the randomly initialized architecture. \n\nThis is an exciting finding for making distinctions between trained and untrained networks as well as architectural priors and trained priors.\"}", "{\"title\": \"Reviewer vZFS Response\", \"comment\": \"We thank the reviewer for their constructive review. We are glad the reviewer found the paper well written and enjoyable to read, the technique novel, the potential to revitalize dead neural architectures, and appreciated the experimentation on multiple modalities. We address the concrete points below.\n\n> W1: Smaller datasets. Higher number of ablations\n\nWe agree! We include experiments with a smaller dataset, CIFAR-10, using the Deep FCN target network and ResNet-18 guide network. We also include some ablation experiments with the copy-paste task where we guide an RNN with a transformer. Our goal is to quickly test how (1) the number of layers used in guidance affects results and (2) whether guiding one layer with multiple guide layers is generally useful. In the case of an RNN guided by a transformer, the RNN representations at each layer received multiple guide layers as supervision including the layer normalization representations and linear layers. We show results in Appendix Section M. \n\nThe answer to (1) for deep networks is that guiding earlier layers leads to better results. With CIFAR-10, we find that guiding the first 5 layers leads to a 20% improvement in accuracy. However, guiding later layers only leads to a decrease in guided network performance. That said, we find that guiding at least one layer leads to a dramatic improvement in performance. 
With RNNs, we found that guiding later layers is more important, but overall, guiding all layers was useful. \n\nThe answer to (2) is more complex. When guiding a Deep FCN with a randomly initialized ResNet-18 guide, the general finding is that our simplest layer mapping is best. In this case, when we apply guidance with multiple guide network layers as supervision to a single target network layer, this hurts performance. In the case of the transformer-guided RNN, we found that all layers of supervision were useful. Removing the multiple layers of supervision hurt copy-paste performance. The largest drop in performance came from removing the layer normalization representations from guiding an RNN layer. \n\n> W2: see more in depth discussion of the ViT to CNN/MLP experiments which are hinted at in figure 3. \n\nOur apologies. We should have included more discussion. Similar to guidance with a ResNet-18, we applied guidance with a ViT-B-16. We applied a similar mapping of guide network layers to target network layers. In this case, that included internal layers of the transformer architecture i.e. multi-head attention layers, layer-norm layers, and linear layers. We achieved an accuracy of 11.33% with a randomly initialized ViT-B guide network, which establishes that guide network size doesn\u2019t necessarily correspond with outcome. \n\nIn general, we only use ViT to guide the Deep FCN in creating figure 3, with the goal of analyzing error consistency between the guided networks. In error consistency, we measured the overlap in model predictions when guided by different networks. We find that this reconstructed the patterns found\"}", "{\"title\": \"Followup for Reviewer Kh2H\", \"comment\": \"Thank you for taking the time to review our work. Since the discussion period is coming to a close, we just wanted to reach out to make sure our response adequately addressed your concerns. 
Please let us know if you have any additional concerns, we would be happy to clarify! If the answer sufficiently addresses your concerns, we would appreciate if you could adjust your score.\n\nThank you,\nAuthors.\"}", "{\"title\": \"Reviewer rthw Response (Part 2)\", \"comment\": \"> W4: For example, what are the geometrical differences as a result of matching to the untrained and trained networks?\n\nWe agree that we could have provided more intuitive explanations for the improvements seen with randomly initialized guide networks. The provided explanation is a reasonable interpretation: for certain networks, the representation space of the untrained guide could be easier for the target network to match, particularly when the architectures are strikingly different. Another way of presenting this finding is that guidance is finding a distinction between learned priors and architectural priors when preventing undesirable features of training target networks. For example, it seems architectural aspects of CNNs are useful for preventing overfitting in the Deep FCN. Similarly, memory incorporation improves in an RNN when guiding with a randomly initialized transformer as seen with improved copy-paste performance. These architectural priors don\u2019t disappear when the guide network is trained but are more difficult to replicate due to the target network having to replicate both learned and architectural priors, assuming that the learned prior isn\u2019t as useful for overcoming the gap in performance. It\u2019s likely that some aspect of the ResNet is useful for preventing overfitting. This could be sparsity, the distribution of the eigenvalues, etc. But we believe this is an exciting finding that showed whether certain training properties were prevented by inductive biases in the architecture or inductive biases gained through optimization and knowledge! We plan to dedicate further work to understanding these distinctions. 
\\n\\nWe can indeed verify that replicating a randomly initialized guide network activation space is easier based on Figure 6 and 7 in Appendix Section F for all results where the randomly initialized guide performs better than the trained guide. The CKA dissimilarity optimizes more quickly or starts out smaller. However, note that this still isn\\u2019t necessarily the whole story. The Deep ConvNet matches an untrained guide network more quickly but the trained guide network leads to better results.To further verify this, we can consider results with another similarity metric, ridge regression as reported in Figure 16. Ridge regression provides a linear mapping from one representation space to another and we find that ridge regression results with an untrained guide network lead to a much lower dissimilarity loss. Of course, questions remain about a deeper explanation of the geometric differences between networks guided by a trained or untrained guide. We analyze both the effective rank [4] and intrinsic dimensionality (ID) via PCA [5]. We show results in Appendix J. In general, it seems that these geometric differences don\\u2019t fully explain guidance. Guidance with an untrained network has similar effective rank and ID to a network trained with no guidance. \\n\\n> W5: Multiple guides through training\\n\\nWe apologize, but the current description is a bit unclear to us on how we would guide with different networks through training. Do you mean applying guidance with trained and randomly initialized guides at different stages? \\n\\n> W6: Would the target network its full accuracy in Table 1 if training is done in two phase where the first phase will fully train the network on the untrained guide representations (weight initialization), and phase 2 would do the task training?\\n\\nThis is a great point! We had this additional baseline after the paper submission. 
The main issue with our experiment is that network initialization is not dependent on a task nor is dependent on data. To fix this, we modify guidance to better support initialization as follows. We first maximize the CKA between the layers of the target FCN and guide randomly-initialized ResNet-18 for 300 training steps. Critically, this is done with noise so we don\\u2019t use real images. The noise consists of samples from an independent Gaussian with mean 0 and standard deviation 1. Afterwards, we take the target FCN and train this on ImageNet without any guidance. We show results in Appendix Section K. We find that overfitting is still prevented. This better supports our point on initialization. We will plan to update Figure 4 to reflect this finding.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
C2uViDZmNp
Information Subtraction: Learning Representations for Conditional Entropy
[ "Keng-Hou Leong", "Yuxuan Xiu", "Victor Wai Kin Chan" ]
The representations of conditional entropy and conditional mutual information are significant in explaining the unique effects among variables. The previous works based on conditional contrastive sampling have successfully eliminated information about discrete sensitive variables, but have not yet addressed continuous cases. This paper introduces a framework of Information Subtraction capable of representing arbitrary information components between continuous variables. We implement a generative-based architecture that outputs such representations by simultaneously maximizing an information term and minimizing another. The results highlight the representations' ability to provide semantic features of conditional entropy. By subtracting sensitive and domain-specific information, our framework effectively enhances fair learning and domain generalization.
[ "conditional entropy", "conditional representation learning", "self-supervised learning" ]
Reject
https://openreview.net/pdf?id=C2uViDZmNp
https://openreview.net/forum?id=C2uViDZmNp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uILuunTuQN", "tNANZssU7a", "qyix53gqKA", "nmO4fzea33", "nEiscvk2kM", "mIyrwFZVSc", "lRu3ovzZkv", "kyn5MK1BdM", "ggN7bDaRTe", "fPqmD8GzNB", "a2hNjdPFeB", "Ynvukn7yPR", "W0AC6DbEo7", "NolJaZvOMF", "LrcyA4AS2W", "AfrWJlu3Lk", "1W9AVrBVdJ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732705819709, 1732705306754, 1732705740888, 1732705160901, 1730285492104, 1732786986091, 1732712372865, 1732705388647, 1730698366673, 1732705957193, 1730286145449, 1737523817275, 1734487322511, 1732705565686, 1732706099747, 1732707495465, 1730670629648 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7100/Authors" ], [ "ICLR.cc/2025/Conference/Submission7100/Authors" ], [ "ICLR.cc/2025/Conference/Submission7100/Authors" ], [ "ICLR.cc/2025/Conference/Submission7100/Authors" ], [ "ICLR.cc/2025/Conference/Submission7100/Reviewer_zrA2" ], [ "ICLR.cc/2025/Conference/Submission7100/Reviewer_KedS" ], [ "ICLR.cc/2025/Conference/Submission7100/Reviewer_EVYs" ], [ "ICLR.cc/2025/Conference/Submission7100/Authors" ], [ "ICLR.cc/2025/Conference/Submission7100/Reviewer_EVYs" ], [ "ICLR.cc/2025/Conference/Submission7100/Authors" ], [ "ICLR.cc/2025/Conference/Submission7100/Reviewer_Q4Lr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7100/Area_Chair_uT2v" ], [ "ICLR.cc/2025/Conference/Submission7100/Authors" ], [ "ICLR.cc/2025/Conference/Submission7100/Authors" ], [ "ICLR.cc/2025/Conference/Submission7100/Authors" ], [ "ICLR.cc/2025/Conference/Submission7100/Reviewer_KedS" ] ], "structured_content_str": [ "{\"title\": \"Response for the second weakness\", \"comment\": \"Here are the responds for the second weakness.\", \"q2\": 
\"Experimental Section: The section lacks comparability to prior work. As I understand it, it stands very much isolated and it's hard for me to estimate the significance of the contribution the authors have made. If existing models cannot be applied for comparison, I'd still expect the authors to come up with other, simpler, baseline architectures against which to compare.\\nIt is unclear to me whether the reported values come from train, validation or test splits. The lack of standard deviation (suggesting no cross validation was used) makes it hard to estimate the significance of the results. Additionally, the chosen \\\"real-world\\\" datasets seem very simple to me.\\nOverall, unfortunately, the experimental section does not convince me.\", \"a2\": \"We fully agree that including comparative experiments can underscore the significance of our findings. As mentioned previously, we are currently in the process of supplementing our study with two contrastive-based baselines, which will be included in the final camera-ready version.\\n\\nIn our work, different sections utilize distinct train/test splits. In the synthetic case (Sections 5.1 to 5.3), the data is generated from the same provided distribution. Therefore, there is no necessity for splitting, and the entire dataset serves as the training set. In contrast, in the real cases (Sections 5.4, 5.5), where empirical data are employed, the assumption of i.i.d. can hardly be sustained. Consequently, the data in these sections have been partitioned into training and testing sets. We have incorporated the corresponding explanation in Appendix A, B, C, D, and E, accordingly.\\n\\nWe wish to note that the adult dataset is one of the most commonly used examples in fair learning. The main purpose of this paper is to propose a fundamental model, so we wish to demonstrate that our architecture can perform basic classification tasks effectively. 
We choose not to work on Computer Vision tasks, as conditional mutual information in the context of CV may not be straightforward to interpret. Besides, we believe that the plant cover dataset used in Section 5.5 is an excellent example of a domain generalization problem. The data distribution within this dataset aligns with actual geographical features and ecological distributions, providing an intuitive explanation of the relationship between input features and domains.\n\nThank you again for your time and your insightful questions.\"}", "{\"title\": \"Response for the questions\", \"comment\": \"Here are the responses for the questions.\", \"q1\": \"Did you try adding a hyper-parameter to one of the terms in the loss? Could that allow for finer control by the user on the learned representation?\", \"a1\": \"Thank you for your valuable feedback. We did incorporate weights into Equation (10) to balance the losses, and we apologize that they were not explicitly stated. It is indeed necessary to revise Equation (10) and conduct a sensitivity analysis. The sensitivity analysis results of Sections 5.1 and 5.3 have been provided in Figure G.1 of Appendix G. The results of other sections will be available in the camera-ready version.\", \"q2\": \"Did you have issues in training stability?\", \"a2\": \"Yes, as reviewer KedS has noted, MINE is well-known for its instability during training. This is primarily because minor parameter updates in the neural network can cause significant fluctuations in the upper bound estimated by MINE, while simply reducing the learning rate can lead to issues such as slow training. We should acknowledge that careful selection of parameters is required during training, which is almost a universal issue for all frameworks utilizing MINE, and is one of the tradeoffs for enjoying the simplicity of MINE. 
We have provided the learning rates and network sizes for all the experiments in Tables A.1, B.1, C.1, D.1, and E.1.\", \"q3\": \"Did you test the effectiveness of the proposed method on high dimensional data? How did the computational cost scale?\", \"a3\": \"The focus of our current manuscript is primarily on the theoretical framework. We anticipate exploring very high-dimensional and complex datasets in our future work. In fact, in this work, we have utilized datasets with a dimension greater than 100 for variable X in Section 5.4, and with a dimension of 10 for variable C in Section 5.2.\n\nRegarding computational cost, our framework incorporates two neural networks. The size of the neural network's hidden layers is determined primarily by the complexity of the relationships between the input X, target Y, and conditional variable C, rather than the input dimension itself. Regardless of the size of the input dimension, our architecture remains consistent. Therefore, at this stage, there is no clear relationship between the input dimension and computational cost.\n\nThank you again for your time and your insightful questions.\"}", "{\"title\": \"Response for the first weakness\", \"comment\": \"First of all, please allow us to express our gratitude for your valuable suggestions. Here are the responses for the first weakness.\", \"q1\": \"Related Work Section: The authors write: \\\"While we share similar architectures with these works, their structures are not designed for conditional representations.\\\" It is unclear to me how large the contribution of this paper is. Is the proposed architecture only a slight modification of existing work? Or would a slight modification of existing work suffice to reach the same goal as the authors propose? 
If so, why is there no comparison to those in the experimental section?\", \"a1\": \"Firstly, our primary contribution lies in the introduction of the concept of information subtraction and demonstrating how it theoretically represents various information sector elements within a Venn diagram. We have also showcased the significant applications of information subtraction, including its role in fair learning and domain generalization.\\n\\nTo realize this concept, we have designed an architecture aimed at achieving this goal. Our generator-discriminator architecture consists of two broad modules, and we have not provided detailed specifications or discussions on which types of neural networks should be employed for each component, because they could be very flexible. For instance, while we mention in the paper that the generator could be an RNN, CNN, or ResNet, we have only utilized FNN. Similarly, in our revised version, we indicate that the discriminator could be MINE, CLUB, or CCMI, but our experiments have revealed that only MINE yields satisfactory results.\\n\\nRegarding the similarity to related works, it is important to clarify that many related works employ a generator-discriminator architecture, but the specific implementation of our FNN+MINE structure, the optimization logic, and the optimization objectives are uniquely ours. Other methods used in the related works, such as contrastive-based and surprised-based approaches, are more or less inapplicable within the context of our information subtraction concept. These methods have their own set of challenges and future research trends, which further highlight the distinctiveness of our approach. 
To avoid any confusion, we think it will be more appropriate to describe the similarity of objectives rather than of architectures in line 164.\\n\\nIn accordance with the suggestions from you and other reviewers, we are currently in the process of supplementing two contrastive-based baselines, one for discrete scenarios [1] and another for continuous scenarios [2]. The code for these has been successfully replicated; however, the results significantly underperform compared to our method. We are currently investigating whether this is due to issues with experimental parameters or if their contrastive-based frameworks are ineffective in carrying conditional mutual information within our experimental context. Therefore, in this rebuttal revised version, we have not yet included the comparative experimental results. Nevertheless, we commit to incorporating these findings in the final camera-ready version.\\n\\n[1] Martin Q Ma, Yao-Hung Hubert Tsai, Paul Pu Liang, Han Zhao, Kun Zhang, Ruslan Salakhutdinov, and Louis-Philippe Morency. Conditional contrastive learning for improving fairness in self-supervised learning. arXiv preprint arXiv:2106.02866, 2021.\\n\\n[2] Yao-Hung Hubert Tsai, Tianqin Li, Martin Q Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, and Ruslan Salakhutdinov. Conditional contrastive learning with kernel. In International Conference on Learning Representations, 2022.\"}", "{\"title\": \"Response for the weakness\", \"comment\": \"First of all, please allow us to express our gratitude for your valuable suggestions. Here are our responses to the weaknesses.\", \"q1\": \"Line 168: \\u201cBased on our previous work (?)\\u201d - this is a strong hint to the identity of the authors which breaks the double-blind regime of the reviewing process.\", \"a1\": \"We do not consider that this approach violates the double-blind review process, as it does not disclose excessive information.
We just want to put a placeholder here without directly showing our names in the anonymous manuscript. However, we do realize that such a description could allow reviewers who are familiar with our previous work to infer our identity during the review process. We apologize for this flaw.\", \"q2\": \"The novelty might be very limited here. Specifically, the use of MINE-based methods might be the major novelty here.\", \"a2\": \"The use of MINE to generate representations is actually the major novelty of our previous research. In this manuscript, the major contribution lies in the introduction of Information Subtraction, which is a concept achieved by generating representation for conditional entropy. We have formulated the mathematical expressions necessary to achieve this objective and proposed a neural network architecture to implement it. Furthermore, we have discussed the significance of this concept and its effectiveness in downstream tasks including domain generalization and fair learning.\", \"q3\": \"There is no discussion about the failing points of the proposed method. What happened when the condition variable X and the target Y are entangled in more complex ways? How will the learned representation Z be affected?\", \"a3\": \"We greatly appreciate your suggestion. You make a valid point that when X and Y are highly entangled, the generated Z is prone to failure because H(Y|X) is typically small, making information subtraction more challenging. However, designing experiments to validate this is quite difficult, primarily because there is no universally accepted measurement for the entanglement between variables. The complexity of the relationship between two variables cannot be directly quantified using correlation or mutual information alone. 
Nevertheless, we agree that in future work, such metrics could be employed to identify the \\\"failing points\\\" you mentioned.\", \"q4\": \"The formulation and the presented Algorithm are not clear.\", \"a4\": \"Thank you for your suggestions. We have made revisions to Equation (2, 4, 5, 6, 10) and Algorithm (1). If you find any issues with the revised version, we would be grateful if you could specify the detailed concerns. This feedback would enable us to address these points and enhance the clarity and presentation of our manuscript.\"}", "{\"summary\": \"This paper introduces the Information Subtraction framework, which addresses conditional representation learning for continuous variables. It employs a generative architecture with a generator neural network and two discriminators to stabilize information term estimations. The generator's objective is to maximize information from one discriminator while minimizing it from the other, effectively capturing semantic features of conditional entropy and enhancing fair learning by removing sensitive information.\\nThe authors highlight the significance of conditional representation learning and demonstrate the framework's capacity to decompose signals and produce unbiased representations. Experimental results show that the proposed approach improves fairness in both synthetic and real-world contexts and enhances domain generalization by combining domain-specific factors with universal representations.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper effectively outlines the problem from a methodological standpoint, framing it as a well-defined optimization issue that is clear from a mathematical perspective. It provides a comprehensive overview of related work and their limitations, emphasizing the necessity for a representation learning method that can eliminate information pertaining to continuous sensitive variables.
The primary objective is to elucidate the unique effects among variables. Additionally, the inclusion of real-world experiments highlights the paper's contributions to the current research landscape in fair learning and domain generalization applications.\", \"weaknesses\": \"The article presents challenges in terms of readability, as the mathematical loss functions being minimized in practice are not adequately described. The discussion of the quantities to be optimized tends to remain at a high level. Additionally, the architecture description is introduced late in the paper and lacks detail; a more concrete schematic representation with specific input types (e.g., images, tabular data) and detailed analytical loss expressions would be beneficial.\\n\\nMoreover, the existing literature on debiasing appears to be quite extensive regarding the elimination of sensitive information from continuous variables [A], [B]. I found it difficult to discern how this work connects to those prior studies.\\n\\nFinally, the experimental section seems relatively weak in terms of the number of experiments and datasets utilized. Including a straightforward debiasing or fair learning experiment in a real-world context, such as healthcare applications or scenarios involving ethnic biases, along with qualitative visual explanations, would enhance the overall quality of the article.\\n\\n[A] Unbiased Supervised Contrastive Learning, C.A. Barbano et al. ICLR 2023. https://arxiv.org/abs/2211.05568\\n[B] CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information, P. Cheng et al, ICML 2020. https://arxiv.org/abs/2006.12013\", \"questions\": \"Is it necessary to assume that your inputs or latent representations follow a specific distribution (e.g., Gaussian, von Mises-Fisher) to derive your loss functions?\\n\\nAdditionally, the authors mention \\\"Based on our previous work (?)\\\" at one point. 
It is important to note that ICLR requires authors to cite their own work as they would cite others'. In this case, the authors should state: \\\"Based on the previous work [x].\\\" This would allow readers to locate and review the referenced paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. The exploration of different conditional mutual information estimators is commendable, and including these results with an analysis of their limitations would significantly strengthen the justification for your proposed design. The addition of more baselines is also welcomed. However, in its current form, the paper requires further revisions to incorporate feedback from the reviewers. I encourage the authors to resubmit their work after addressing these points.\"}", "{\"comment\": \"Thanks for the thorough replies.\\nAs it stands, I view the manuscript as below the level required for publication.\\nI encourage the authors to incorporate the feedback into a revised version and resubmit their work.\"}", "{\"title\": \"Response for the weaknesses\", \"comment\": \"First of all, please allow us to express our gratitude for your valuable suggestions. Here are our responses to the weaknesses.\", \"q1\": \"Lack of background of CMI estimators: Previous works on estimating (conditional) mutual information are not discussed. In particular, [1] proposes a similar framework where discriminators are used for the estimation. It's also unclear to me how the proposed method differs from the existing approaches for estimating conditional mutual information, e.g. MI-Diff+f-MINE in [1]. Are there any significant technical difficulties for turning an estimator into a representation learner? Would the classifier-based approach advocated by [1] result in better representation?\\n[1] Mukherjee et al.
CCMI : Classifier based Conditional Mutual Information Estimation.\", \"a1\": \"We would like to express our gratitude for the suggestions provided. The use of deep learning methods to estimate conditional mutual information is indeed relevant to our research. The reason we did not include this in the related works section is that we wished to focus on the aspect that our work belongs to representation learning. Generating representation for conditional mutual information is our central focus, and the CCMI approach you mentioned does not employ the inputs' representations while estimating conditional mutual information. Therefore, we did not include it in the main text of the related works; however, we agree that a short discussion can be placed in Line 226.\\n\\nAddressing your second point, we first clarify that the estimator is trained using samples from a fixed distribution of X, Y, and Z. In contrast, a representation learner consists of two components: an encoder and an information estimator. The input for the information estimator within the representation learner is not fixed, as the distribution of Z keeps optimizing during the training process, which poses a challenge for the estimator.\\n\\nThis leads to your third point. In our preliminary experiments, we tested various information estimators, including MINE, CCMI, and CLUB, and we found that only MINE was effective. This is the reason we did not choose other estimators.\", \"q2\": \"Lack of baselines and ablation studies: the experiments, synthetic or real, don't compare to any other methods. It's, therefore, not obvious how the experimental result should be interpreted. 
The experimental section would also benefit from ablation studies to analyze the contribution of different components of the architecture and the impact of hyperparameter choices.\", \"a2\": \"The primary focus of our current manuscript is on the theoretical framework, with the major contribution being the proposal of a representation for conditional entropy as an issue and concept. The emphasis of the experimental section is to demonstrate that the representation Z we generate performs better in fair learning and domain generalization compared to the original input X, rather than presenting an incremental or SOTA approach.\\n\\nAnother reason we did not include too many baselines is that most of the related works do not address the representation of conditional entropy. However, as you rightly pointed out, comparisons would better illustrate the importance of our framework. Therefore, we are currently in the process of supplementing two contrastive-based baselines, one for discrete scenarios [1] and another for continuous scenarios [2]. The code for these has been successfully replicated; however, the results significantly underperform compared to our method. We are currently investigating whether this is due to issues with experimental parameters or if their contrastive-based frameworks are ineffective in carrying conditional mutual information within our experimental context. Therefore, in this rebuttal revised version, we have not included yet the comparative experimental results. Nevertheless, we commit to incorporating these findings in the final camera-ready version.\\n\\nAdditionally, the hyperparameter sensitivity analysis you suggested is indeed crucial. We should incorporate a weight lambda into equation (10) and conduct a sensitivity analysis. 
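As a concrete reading of that change, the lambda-weighted generator objective could be sketched as follows. This is a hypothetical illustration, not code from the paper: `i_zxy` and `i_zx` stand in for the two discriminators' current estimates of I(Z,X;Y) and I(Z;X), and the function name is invented for the example.

```python
def information_subtraction_loss(i_zxy, i_zx, lam=1.0):
    """Generator loss: the negative of I(Z,X;Y) - lam * I(Z;X),
    so minimizing the loss maximizes the weighted subtraction.
    lam trades off keeping information about the target Y against
    removing information about the conditional variable X."""
    return -(i_zxy - lam * i_zx)

# Larger lam penalizes retained information about X more heavily.
print(information_subtraction_loss(0.8, 0.3, lam=1.0))  # about -0.5
print(information_subtraction_loss(0.8, 0.3, lam=2.0))  # about -0.2
```

With lam = 1 this reduces to the unweighted subtraction objective; sweeping lam is what a sensitivity analysis of this hyperparameter would vary.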
The experimental results of sensitivity analysis have been included in Table G.1, G.2 of Appendix G.\\n\\n[1] Martin Q Ma, Yao-Hung Hubert Tsai, Paul Pu Liang, Han Zhao, Kun Zhang, Ruslan Salakhutdinov, and Louis-Philippe Morency. Conditional contrastive learning for improving fairness in self-supervised learning. arXiv preprint arXiv:2106.02866, 2021.\\n[2] Yao-Hung Hubert Tsai, Tianqin Li, Martin Q Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, and Ruslan Salakhutdinov. Conditional contrastive learning with kernel. In International Conference on Learning Representations. 2022.\"}", "{\"summary\": \"The authors introduce a framework for learning representations, aimed at applications in fair learning and domain generalization.\\nThe authors use powerful MINE-based estimators to learn representations that share minimal MI with given conditional variables.\\nPrevious methods focused on discrete sensitive variables, while here the authors extend these approaches to continuous cases.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The choice of MINE-based MI estimator to maximize/minimize MI terms is an excellent choice which allows the proposed method to scale to high-dimensional data.\", \"Extending previous work to include continuous variables is an important contribution.\", \"The proposed method might have potential for broad application due to the above points, and the fact that it is independent of the choice of architecture.\"], \"weaknesses\": [\"Line 168: \\u201cBased on our previous work (?)\\u201d - this is a strong hint to the identity of the authors which breaks the double-blind regime of the reviewing process.\", \"The novelty might be very limited here. Specifically, the use of MINE-based methods might be the major novelty here.\", \"There is no discussion about the failing points of the proposed method. What happened when the condition variable X and the target Y are entangled in more complex ways? 
How will the learned representation Z be affected?\", \"The formulation and the presented Algorithm are not clear.\"], \"questions\": [\"Did you try adding a hyper-parameter to one of the terms in the loss? Could that allow for finer control by the user on the learned representation?\", \"Did you have issues in training stability?\", \"Did you test the effectiveness of the proposed method on high dimensional data? How did the computational cost scale?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response for the weaknesses 1, 2, 3\", \"comment\": \"First of all, please allow us to express our gratitude for your valuable suggestions. Here are the responds for the weaknesses 1, 2, 3.\", \"q1\": \"The article presents challenges in terms of readability, as the mathematical loss functions being minimized in practice are not adequately described. The discussion of the quantities to be optimized tends to remain at a high level.\", \"a1\": \"We greatly appreciate your suggestions. We have made revisions to Equation (2, 4, 5, 6, 10) and Algorithm (1). If you find any issues with the revised version, we would be grateful if you could specify the detailed concerns. This feedback would enable us to address these points and enhance the clarity and presentation of our manuscript.\", \"q2\": \"Additionally, the architecture description is introduced late in the paper and lacks detail; a more concrete schematic representation with specific input types (e.g., images, tabular data) and detailed analytical loss expressions would be beneficial.\", \"a2\": \"We have revised Figure (3) and its captions to provide a detailed explanation. We do not specify input types simply because we have not put a restriction on them. We kindly request you to review the updated version. 
If you find any issues with the revised version, we welcome further discussion with you.\\n\\nRegarding the placement of the architecture section, we are open to your guidance on whether it should precede the related work section, or if you believe it would be more appropriate to position the related works elsewhere.\\n\\nQ3. Moreover, the existing literature on debiasing appears to be quite extensive regarding the elimination of sensitive information from continuous variables [A], [B]. I found it difficult to discern how this work connects to those prior studies.\\n[A] Unbiased Supervised Contrastive Learning, C.A. Barbano et al. ICLR 2023. https://arxiv.org/abs/2211.05568 \\n[B] CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information, P. Cheng et al, ICML 2020. https://arxiv.org/abs/2006.12013\", \"a3\": \"We sincerely appreciate the literature recommendations provided. We agree that works [A] and [B] are closely related to our work. The paper [A] belongs to supervised contrastive learning, which requires both a target label and a conditional label to be provided. However, the problem addressed in our manuscript is situated within a self-supervised setting, implying that our training loss function does not encompass a target label, but is instead generating a representation that filters the conditional label's information from the input. Consequently, it is not apt to serve as a baseline for comparison with our work, and we find it hard to discuss it in our work.\\n\\n[B] is a deep learning-based mutual information estimator. In this estimator, the input is a fixed distribution of X, Y, and Z, and the estimator is trained using samples from this distribution, with the output being the information estimation. Our representation learner, however, consists of two parts: an encoder and an information estimator. 
The input to the information estimator in our representation learner is not fixed, as the distribution of Z is not constant during the training process. We have experimented with various estimators in our preliminary experiments, including MINE, CCMI, and CLUB. We did make a lot of attempts on CLUB, as it is the only one that estimates the upper bound, while the others estimate the lower bound, making it particularly useful for our information minimization part. Unfortunately, we found that only MINE worked, which is why we did not choose CCMI and CLUB. Still, we agree that mentioning other methods in Line 226 could be beneficial.\\n\\nIn accordance with the suggestions from you and other reviewers, we are currently in the process of supplementing two contrastive-based baselines, one for discrete scenarios [1] and another for continuous scenarios [2]. The code for these has been successfully replicated; however, the results significantly underperform compared to our method. We are currently investigating whether this is due to issues with experimental parameters or if their contrastive-based frameworks are ineffective in carrying conditional mutual information within our experimental context. Therefore, in this rebuttal revised version, we have not included yet the comparative experimental results. Nevertheless, we commit to incorporating these findings in the final camera-ready version.\\n[1] Martin Q Ma, Yao-Hung Hubert Tsai, Paul Pu Liang, Han Zhao, Kun Zhang, Ruslan Salakhutdinov, and Louis-Philippe Morency. Conditional contrastive learning for improving fairness in self-supervised learning. arXiv preprint arXiv:2106.02866, 2021.\\n[2] Yao-Hung Hubert Tsai, Tianqin Li, Martin Q Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, and Ruslan Salakhutdinov. Conditional contrastive learning with kernel. In International Conference on Learning Representations. 
2022.\"}", "{\"summary\": \"The authors introduce a framework for representing arbitrary information components between continuous variables using Information Subtraction. Essentially, a generator network is trained to generate a latent representation $Z$ which captures the conditional entropy of target variable $Y$ given conditional variable $X$, without carrying information about $X$ itself.\\nTo achieve this, they use two discriminator networks $A$ and $B$. While $A$ estimates $I(Z,X;Y)$, $B$ estimates $I(Z;X)$. The objective $I(Z,X;Y) - I(Z;X)$ is backpropagated to the generator network.\\n\\nThe authors test their method on two synthetic scenarios and on fair learning and domain generalisation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is well written in general. The challenge being tackled is of interest for part of ICLR's audience. While maybe not novel \\u2013 I am not an expert on the related work \\u2013 the approach of using two discriminator networks to estimate $I(Z,X;Y)$ and $I(Z;X)$ seems reasonable to me.\", \"weaknesses\": \"**Related Work Section**:\", \"the_authors_write\": \"\\\"While we share similar architectures with these works, their structures are not designed for conditional representations.\\\"\\nIt is unclear to me how large the contribution of this paper is. Is the proposed architecture only a slight modification of existing work? Or would a slight modification of existing work suffice to reach the same goal as the authors propose? If so, why is there no comparison to those in the experimental section?\\n\\n**Experimental Section**:\\n\\nThe section lacks comparability to prior work. As I understand it, it stands very much isolated, and it is hard for me to estimate the significance of the contribution the authors have made. If existing models cannot be applied for comparison, I'd still expect the authors to come up with other, simpler baseline architectures against which to compare.
\\n\\nIt is unclear to me whether the reported values come from train, validation or test splits. The lack of standard deviation (suggesting no cross validation was used) makes it hard to estimate the significance of the results. Additionally, the chosen \\\"real-world\\\" datasets seem very simple to me. \\n\\nOverall, unfortunately, the experimental section does not convince me.\", \"questions\": \"The *Weaknesses* section outlines the questions and concerns I have.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper presents a method for learning representations of data which maximize mutual information between inputs and targets while minimizing mutual information between the representation and some given sensitive variables. They present applications in fair machine learning, generalization and other tasks. Unlike prior work this method can be applied to continuous target variables. The method is flexible to many architectures and provided experiments demonstrate the method can work in some settings.\\n\\nThe main issues with the work are its potential novelty, limited experimental validation, potential training instability, clarity, and demonstrated scale.The paper could be improved if the authors address reviewers concerns and added additional (and more challenging) baselines, a deeper discussion on potential stability issues, and an investigation into settings with more complex intra-variable relationships. \\n\\nReviewer feedback was consistent and I will recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"Initially reviewer feedback was consistent recommending rejection for reasons stated in above section. 
In response, the authors committed to adding baselines, studying sensitivity to hyper-parameters, clarifying the math/method, and cleaning up the presentation. Throughout the rebuttal phase, reviewers' minds were not changed -- suggesting the changes needed to improve the work would be too great to constitute a minor revision. Thus, reviewer feedback remained consistent, advocating for rejection.\"}", "{\"title\": \"Response for the questions\", \"comment\": \"Here are our responses to the questions.\", \"q1\": \"Are there any difficulties or details that need attention to make the training scheme work? MINE has proved to be difficult to tune.\", \"a1\": \"You are entirely correct in your observation that MINE is well-known for its instability during the training process. This instability primarily stems from the susceptibility of the bound estimated by MINE to significant fluctuations caused by minor parameter updates in the neural network. Additionally, simply reducing the learning rate to mitigate these fluctuations can lead to issues such as slow training. We should acknowledge that careful selection of parameters is required during training, which is almost a universal issue for all frameworks utilizing MINE, and is one of the tradeoffs for enjoying the simplicity of MINE. We have provided the learning rates and network sizes for all the experiments in Table A.1, B.1, C.1, D.1, and E.1.\", \"q2\": \"In the paper, the generator takes Y as the input; could X also be given in the input? Would that change the result?\", \"a2\": \"Given our objective to eliminate the information of X from Y to obtain Z, we believe that incorporating X as an input of the generator may not be particularly beneficial, and could potentially lead to Z containing more information from X than desired. We understand your concern might stem from the possibility that when X is input into the generator, the generator might learn the characteristics of X and avoid outputting it.
This is a plausible scenario for a black-box generator, and we can further explore in our future work.\", \"q3\": \"Line 168, reference missing. Why is it called an expanded architecture?\", \"a3\": \"It is our previous work. To avoid violating the double-blinded review protocol, we temporarily concealed this paper during the review stage and included a placeholder. The main contribution of this previous work was the use of MINE as a discriminator, and our current submission significantly expands upon this foundation. The placeholder will be filled in the camera-ready version.\", \"q4\": \"For the synthetic experiments, does the proposed method learns a better representation than the baselines such as contrastive-based approach?\", \"a4\": \"As we have mentioned previously, we are currently in the process of supplementing two contrastive-based baselines, one for discrete scenarios [1] and another for continuous scenarios [2]. We commit to incorporating these results in the final camera-ready version.\\n\\n[1] Martin Q Ma, Yao-Hung Hubert Tsai, Paul Pu Liang, Han Zhao, Kun Zhang, Ruslan Salakhutdinov, and Louis-Philippe Morency. Conditional contrastive learning for improving fairness in self-supervised learning. arXiv preprint arXiv:2106.02866, 2021.\\n[2] Yao-Hung Hubert Tsai, Tianqin Li, Martin Q Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, and Ruslan Salakhutdinov. Conditional contrastive learning with kernel. In International Conference on Learning Representations. 2022.\", \"q5\": \"Does the proposed method perform better in the downstream task than other approaches for the fairness and domain invariant learning setting?\", \"a5\": \"The objective of the experiments conducted in Sections 5.4 and 5.5 is to demonstrate that the generated Z provides more effective information than the original input features X and better accomplishes the task of fair and domain-invariant learning, thereby illustrating the effectiveness of information subtraction. 
Comparing with other approaches is not the focus of our experiments; however, we agree that in subsequent work, when we further explore the application of information subtraction in fair learning, we should conduct a thorough comparison with other relevant studies.\\n\\nThank you again for your time and your insightful questions.\"}", "{\"title\": \"Response for Weakness 4 and Questions 1,2\", \"comment\": \"Here is our response to Weakness 4.\", \"q4\": \"Finally, the experimental section seems relatively weak in terms of the number of experiments and datasets utilized. Including a straightforward debiasing or fair learning experiment in a real-world context, such as healthcare applications or scenarios involving ethnic biases, along with qualitative visual explanations, would enhance the overall quality of the article.\", \"a4\": \"We wish to note that the Adult dataset is one of the most commonly used examples in fair learning. Besides, we believe that the plant cover dataset used in Section 5.5 is an excellent example of a domain generalization problem. The data distribution within this dataset aligns with actual geographical features and ecological distributions, providing an intuitive explanation of the relationship between input features and domains.\\n\\nWe agree that qualitative visual explanations could be very helpful for better illustrating our results. We have not yet come up with a solution, and we would be glad to discuss any further suggestions you may have.\\n\\nHere are our responses to the questions.\", \"q1\": \"Is it necessary to assume that your inputs or latent representations follow a specific distribution (e.g., Gaussian, von Mises-Fisher) to derive your loss functions?\", \"a1\": \"This is not a requirement, and it is, in fact, one of the strengths of our framework. Compared to structures like VAE, our architecture imposes no distributional restrictions on the input, representation, or output.
However, this is not the unique contribution of this paper, hence we did not highlight it. Nevertheless, it is indeed worth mentioning to alleviate any potential confusion.\", \"q2\": \"Additionally, the authors mention \\\"Based on our previous work (?)\\\" at one point. It is important to note that ICLR requires authors to cite their own work as they would cite others'. In this case, the authors should state: \\\"Based on the previous work [x].\\\" This would allow readers to locate and review the referenced paper.\", \"a2\": \"Thank you for clarifying the rules. We originally thought that citing our own paper in the anonymous paper will lead to desk rejection. We apologize for the mistake, but we decided not to put it on the revised version now, or it will be too obvious for the reviewers to notice our identity. The reference will be available in the camera-ready version.\\n\\nThank you again for your time and your insightful questions.\"}", "{\"title\": \"Descriptions and comments of the revised version\", \"comment\": \"Dear reviewers,\\n\\nFirst of all, please allow us to express our gratitude for your valuable suggestions. The major feedbacks include:\\n\\n1. Our contribution is ambiguous: In this manuscript, the major contribution lies in the introduction of Information Subtraction, which is a concept achieved by generating representation for conditional entropy. We have formulated the mathematical expressions necessary to achieve this objective and proposed a neural network architecture to implement it. Finally, we show the effectiveness of our framework in downstream tasks including domain generalization and fair learning.\\n\\n2. 
Baseline comparison lacking: The primary focus of the experimental section is to demonstrate that the representation Z we generate performs better in fair learning and domain generalization compared to the original input X, rather than presenting an incremental or SOTA approach. Still, we agree that including baselines for comparison will be beneficial for illustrating the effectiveness of our framework. We are currently in the process of supplementing two contrastive-based baselines, one for discrete scenarios and another for continuous scenarios.\\n\\n3. Our work v.s. Information estimator: Our work belongs to representation learning, and we use an information estimator as one of the modules in the framework. Some information estimators do not emphasize representations for samples in their framework (such as CCMI), and therefore they are different from representation generators.\\n\\nWe have learned a lot from your feedback and revised our manuscript accordingly. Here are some major modifications. All the modifications have been highlighted in blue.\\n\\n1. We have made revisions to Equation (2, 4, 5, 6, 10), Figure (3), and Algorithm (1).\\n\\n2. Detailed experiment settings are provided in Appendix A, B, C, D, and E.\\n\\n3. We have provided a sensitivity analysis on the hyperparameter in Appendix G.\\n\\n4. We are still working on the baselines for comparison. They will be available in the camera-ready version.\\n\\nThank you again for your time and your insightful questions.\"}", "{\"summary\": \"This paper proposes a framework called \\\"Information Subtraction\\\" for learning a representation Z that maximizes conditional entropy H(Y|X) or, put in another way, maximizes the conditional mutual information (CMI) I(Z;Y|X). The method applies to continuous variables, which is harder than discrete variables. The proposed framework utilizes an approach similar to generative adversarial training where discriminators are used to maximize or minimize information terms. 
The authors evaluate the framework's performance on synthetic and real-world datasets, demonstrating its effectiveness in fair learning and domain generalization tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The framework tackles a relatively under-explored and challenging problem of selectively maximizing and minimizing specific information components during representation learning. Previous works on CMI focus more on obtaining a good estimate, while this work is interested in leveraging CMI for representation learning.\", \"weaknesses\": \"**Lack of background of CMI estimators**: Previous works on estimating (conditional) mutual information are not discussed. In particular, [1] proposes a similar framework where discriminators are used for the estimation. It's also unclear to me how the proposed method differs from the existing approaches for estimating conditional mutual information, e.g. MI-Diff+f-MINE in [1]. Are there any significant technical difficulties in turning an estimator into a representation learner? Would the classifier-based approach advocated by [1] result in better representations?\\n\\n**Lack of baselines and ablation studies**: the experiments, synthetic or real, don't compare to any other methods. It's, therefore, not obvious how the experimental result should be interpreted. The experimental section would also benefit from ablation studies to analyze the contribution of different components of the architecture and the impact of hyperparameter choices.\\n\\n[1] Mukherjee et al. CCMI : Classifier based Conditional Mutual Information Estimation.\", \"questions\": \"1. Are there any difficulties or details that need attention to make the training scheme work? MINE has proven to be difficult to tune.\\n2. In the paper, the generator takes Y as the input, could X also be given in the input? Would that change the result?\\n3. Line 168, reference missing. 
Why is it called an expanded architecture?\\n4. For the synthetic experiments, does the proposed method learn a better representation than baselines such as a contrastive-based approach?\\n5. Does the proposed method perform better in the downstream task than other approaches for the fairness and domain invariant learning setting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
C25SgeXWjE
Large Language Models Meet Symbolic Provers for Logical Reasoning Evaluation
[ "Chengwen Qi", "Ren Ma", "Bowen Li", "He Du", "Binyuan Hui", "Jinwang Wu", "Yuanjun Laili", "Conghui He" ]
First-order logic (FOL) reasoning, which involves sequential deduction, is pivotal for intelligent systems and serves as a valuable task for evaluating reasoning capabilities, particularly in chain-of-thought (CoT) contexts. Existing benchmarks often rely on extensive human annotation or handcrafted templates, making it difficult to achieve the necessary complexity, scalability, and diversity for robust evaluation. To address these limitations, we propose a novel framework called ProverGen that synergizes the generative strengths of Large Language Models (LLMs) with the rigor and precision of symbolic provers, enabling the creation of a scalable, diverse, and high-quality FOL reasoning dataset, ProverQA. ProverQA is also distinguished by its inclusion of accessible and logically coherent intermediate reasoning steps for each problem. Our evaluation shows that state-of-the-art LLMs struggle to solve ProverQA problems, even with CoT prompting, highlighting the dataset's challenging nature. We also finetune Llama3.1-8B-Instruct on a separate training set generated by our framework. The finetuned model demonstrates consistent improvements on both in-distribution and out-of-distribution test sets, suggesting the value of our proposed data generation framework. Code available at: \url{https://github.com/opendatalab/ProverGen}
[ "logical reasoning", "symbolic provers", "LLMs evaluation" ]
Accept (Poster)
https://openreview.net/pdf?id=C25SgeXWjE
https://openreview.net/forum?id=C25SgeXWjE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zAjcnvFTDa", "wlafnEQQzr", "vt9Jwtj6OM", "uIe2CeYI9P", "rujhVhwrQM", "rteoQEdz6E", "lkKNPDbqtI", "j5sRziMKQw", "hAldc7OY8x", "Z4JsEDoymk", "WoMFLXsuj6", "TYWgYUoI24", "QpgGsplosh", "QR03zD1bu5", "Q4526FNqjY", "Ka09HOT1XD", "KGhPC1KggT", "JxUqY70xvM", "IuMa2CjuGe", "DL3RhSwALU", "Cx3V6f5TaK", "6OjLg9Cfjt", "6GC3qBG5t8", "34qVBSE1ug", "1guiSyJhIb", "1Gf4arfrHA" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1733226868481, 1732512426451, 1733106907358, 1730608933740, 1733227013043, 1732386481875, 1732196723031, 1732256847644, 1732701607620, 1732195807382, 1732196726090, 1732541833679, 1732195103051, 1730264981987, 1735034083937, 1732197095170, 1730704225248, 1729986609614, 1732254422095, 1732195691475, 1732972789950, 1732196897223, 1732193633745, 1732209454638, 1733227069793, 1737523513653 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Reviewer_NAgL" ], [ "ICLR.cc/2025/Conference/Submission2601/Reviewer_oQpi" ], [ "ICLR.cc/2025/Conference/Submission2601/Reviewer_BNmr" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Reviewer_BNmr" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Reviewer_64s6" ], [ "ICLR.cc/2025/Conference/Submission2601/Area_Chair_u1AD" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Reviewer_NAgL" ], [ "ICLR.cc/2025/Conference/Submission2601/Reviewer_oQpi" ], [ "ICLR.cc/2025/Conference/Submission2601/Reviewer_64s6" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Submission2601/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer oQpi (1/3)\", \"comment\": \"> Follow-up Q1: Prior work such as ProntoQA and RuleTaker all come with source code for generation and verification, while others like BoardGameQA, Proofwriter contained chains built in various ways. It is not sufficiently demonstrated how more novel is the top-down approach and how more coherent is the dataset compared to the previous ones.\\n\\nWhile having source code is valuable for reproduction, the key innovation lies in how the data is generated and the inherent properties of the dataset. Our framework introduces several novel aspects:\\n\\n1. **Complex Logic Coverage**: Our dataset incorporates a broader range of logical constructs, such as $\\\\(A \\\\to B\\\\), \\\\(A \\\\oplus B\\\\), \\\\(A \\\\lor B\\\\), \\\\(A \\\\land B\\\\), \\\\(A \\\\land (B \\\\land C)\\\\), \\\\((A \\\\land B) \\\\to C\\\\)$, and others. In contrast, datasets like ProntoQA and RuleTaker are limited to simpler logics such as $\\\\(A \\\\to B\\\\)$ and basic conjunctions and implications.\\n\\n2. 
**Coherent Language Use**: We have focused on ensuring greater linguistic coherence within our dataset. For example, our sentences are contextually linked and logically consistent, unlike some examples from other datasets, which may seem disjointed or arbitrary:\\n - ProntoQA: \\\"Every tumpus is not angry. Tumpuses are rompuses.\\\"\\n - ProofWriter: \\\"The cat eats the bear. The cat is green. The cat is kind.\\\"\\n - ProverGen: \\\"If Sawyer is a performer, then he either has good dance skills or is a musician.\\\"\\n\\n3. **Ease of Complexity Management**: Our framework allows for straightforward incorporation of new logical constructs by simply adding them to the rule pool. This flexibility contrasts with the need for creating new templates or retraining models in other datasets like ProntoQA, RuleTaker, and ProofWriter.\\n\\nAdditionally, BoardGameQA is not an FOL dataset, making direct comparisons less relevant. Our dataset's focus on FOL ensures that it remains coherent and applicable to the specific domain of logical reasoning.\\n\\n> Follow-up Q2: Deductive reasoning benchmarks all share the same tree-like reasoning chain with some modifications, for example, BoardGameQA also introduces distractions and contradictory facts. To justify this, one needs a clear definition of diversity and quantitative compare it against others\\n\\nWe would like to argue that the mere presence of tree-like reasoning chains does not inherently signify novelty. Rather, the innovation lies in the methods used to generate these chains and their structural characteristics. Moreover, it is important to note that not all deductive reasoning benchmarks utilize tree-like reasoning chains. For instance, the reasoning path in ProntoQA is explicitly linear, as stated in the original paper: \\\"To generate questions that are not overly complex, we restrict the ontologies to be linear\\\"\\n\\nOur paper and the rebuttal already provide a detailed explanation of the generation process. 
Regarding structural characteristics, as addressed in our response to Reviewer NAgL, our dataset encompasses problems with highly diverse reasoning chains.\\n\\n\\n| | Easy | Medium | Hard |\\n|-----------|------|:-------|------|\\n| # Unique logic skeletons | 85 | 221 | 494 |\\n| Total number | 500 | 500 | 500 |\\n\\nAs a comparison, other datasets do not have such diverse reasoning chains (data from FOLIO[1]).\\n\\n| | RuleTaker | ProofWriter | FOLIO |\\n|--------------------------|------|:-------|------|\\n| # Unique logic skeletons | 101 | 101 | 76 |\\n| Total number | 500k | 500k | 1435 |\\n\\nAgain, BoardGameQA is not an FOL dataset, making direct comparisons less appropriate.\\n\\n[1] Han, Simeng, et al. \\\"Folio: Natural language reasoning with first-order logic.\\\" arXiv preprint arXiv:2209.00840 (2022).\\n\\n> Follow-up Q3: As mentioned above, work such as ProntoQA and RuleTaker all come with source code for generation, where one can also customize for complexity.\\n\\nPlease see our response for Follow-up Q1.\\n\\n> Follow-up Q4: Comparing models trained on 1K data versus 5k does not make any sense even if the former is trained for 5 more times. I do not think one can reasonably draw any conclusion from this comparison.\\n\\nAs mentioned in our response to Q5, FOLIO is trained with 1k examples because its training set only contains 1k instances. \\nEven without considering FOLIO, the comparison results with ProofWriter, which also contains 5k samples, still highlight that our dataset is of high quality. 
\\n\\n> Follow-up Q5:\\n> \\n> Author: Our study shows that LLMs still struggle significantly with complex reasoning tasks, particularly those involving long chains of reasoning and intricate logic.\\n> \\n> Reviewer oQpi: This is also sufficiently demonstrated in prior work such as ProntoQA, RuleTaker, BoardGameQA, and Proofwriter.\\n\\nWe would like to point out that datasets like ProntoQA, ProofWriter and RuleTaker do not involve intricate logic (see our response for Follow-up Q1).\"}", "{\"title\": \"Response to Submission2601 Authors\", \"comment\": \"I appreciate the authors' timely and thoughtful response.\\n\\n1. On experiments with corrupted dataset without universal rules:\\n\\nThat makes sense that if the generated proofs are sufficiently generic, the model would be less likely to exploit background knowledge to use shortcuts when reasoning. Though in principle it is possible (but I think it's unlikely) that \\\"real world\\\" proofs about specific topics would be correlated with specific proof structures or patterns, which the model may be able to learn and exploit.\\n\\nThe experimental results on the \\\"corrupted\\\" examples where universal rules are removed look interesting. Though I wonder if there is a confounding effect, for example, by not having any examples of universally-quantified rules in the prompt, the model may be less inclined to use modus ponens at all. One way to control for this would be to add examples of universally-quantified rules that are irrelevant. However, even this experiment would have potential confounders where the presence of irrelevant information may negatively affect the model's reasoning ability. In general, it seems rather difficult to test for this effect cleanly. More ideally, there would be a source of rules that are OOD from the model's training (perhaps from a domain or game that was released after the model's training cutoff, or a small human-annotated set of rules, or fictional rules).\\n\\n2. 
On revisions:\\n\\nI appreciate the authors' revisions that more accurately compare with previous work, and the new discussion of data contamination in the paper.\\n\\n3. On the coverage of proof generation:\\n\\nI thank the authors for their clarification on the coverage of the proof generation. When generating logical forms, are compositional forms generated? (e.g., nested conjunction within a disjunction within a universally-quantified rule) And while I do have a better understanding now about the generative process of logical forms, I am still a bit unclear on what deduction rules (i.e., rules of inference) are generated. Does the dataset focus on modus ponens? Or are other deduction rules also generated? (e.g., such as conjunction rules such as given A is true and B is true, conclude that A & B is true, or rules involving negation, etc)\\n\\nIf only modus ponens is used, then this inference system is better compared to a [Hilbert system](https://en.wikipedia.org/wiki/Hilbert_system) (as opposed to, say, natural deduction), which would be perfectly valid.\\n\\n4. On few-shot prompting experiments:\\n\\nThe 5-shot few-shot prompting results look very interesting, and it doesn't seem like there is a significant difference in performance relative to 2-shot prompting.\\n\\nIn light of the proposed revisions, I will raise my score accordingly.\"}
It is not sufficiently demonstrated how much more novel the top-down approach is and how much more coherent the dataset is compared to the previous ones.\\n\\n> Symbolic Prover Integration: Our framework includes a symbolic prover, allowing for comprehensive coverage of possible logic expressions and the generation of diverse reasoning trees. In contrast, ProntoQA's approach is limited to predefined reasoning paths, which restricts the diversity of reasoning trees.\\n\\nDeductive reasoning benchmarks all share the same tree-like reasoning chain with some modifications, for example, BoardGameQA also introduces distractions and contradictory facts. To justify this, one needs a clear definition of *diversity* and to quantitatively compare it against others\\n\\n> Complexity Control: Our framework offers extensive customization of reasoning steps and complexity, which is not available in previous datasets including ProntoQA. We believe these aspects collectively contribute to the novelty and significance of our work. We are pleased to note that these contributions have been recognized by the other three reviewers.\\n\\nAs mentioned above, work such as ProntoQA and RuleTaker all come with source code for generation, where one can also customize for complexity.\\n\\n> W3 W4\\n\\nMy concerns are addressed\\n\\n> To ensure fairness in comparison, we optimized training configurations through various hyperparameter experiments\\n\\nComparing models trained on 1K data versus 5k does not make any sense even if the former is trained five times longer. I do not think one can reasonably draw any conclusion from this comparison. 
\\n\\n> Here are the key insights provided in our work:\\n> Our study shows that LLMs still struggle significantly with complex reasoning tasks, particularly those involving long chains of reasoning and intricate logic.\\n\\nThis is also sufficiently demonstrated in prior work such as ProntoQA, RuleTaker, BoardGameQA, and Proofwriter.\\n\\n> We conducted ablation studies revealing that distracting factors and shuffled premises notably impact model accuracy, an area not previously explored in existing benchmarks.\\n\\nThe effect of distraction in deductive reasoning is also demonstrated in BoardGameQA.\\n\\n> Our scalable, complex, natural, and diverse FOL dataset enhances LLMs' logical reasoning capabilities, even on out-of-distribution datasets.\\n\\nA verified training dataset could be a good contribution, though I'm not very confident that checking 60 samples would be sufficient to verify the dataset.
This dataset is effectively still synthetic as the reasoning chains are not drawn from real-world natural language distribution.\\n\\nI have read other reviewers' comments, and while I partially agree with their views, I still believe the issues I have are critical and potentially overlooked. That said, I'm keeping my score.\"}", "{\"summary\": \"The authors propose a first-order logic reasoning data generation pipeline and introduce a reasoning benchmark/dataset called ProverGen. This pipeline combines symbolic provers with LLMs to generate scalable, diverse, and high-quality data. Mainstream LLMs are evaluated on this benchmark. Additionally, the authors trained a Llama3.1-8B model on a training set produced by the pipeline, and results show that ProverGen enhances both in-distribution and out-of-distribution logic reasoning abilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written with clear presentation. The authors effectively explain how ProverGen differs from existing FOL reasoning benchmarks.\\n\\n2. The main contribution is a novel top-down generation framework that creates logically coherent and diverse FOL problems, includes varying difficulty levels and two types of distractions, poses meaningful challenges, as SOTA models achieve only 50% accuracy on hard problems.\\n\\n3. The experiments are clear and relatively sound. The authors evaluate multiple SOTA LLMs with both standard and chain-of-thought prompting, demonstrate improvement through model finetuning, and show generalization to out-of-distribution FOL tasks.\\n\\n4. Provides reproducible results and promises to release code and dataset.\", \"weaknesses\": \"1. The scope is relatively limited. Focuses exclusively on first-order logic reasoning, which may not fully represent real-world reasoning scenarios. 
Lacks evaluation on general reasoning benchmarks (e.g., MMLU, GSM8K, BIG-bench) to assess broader impact of training.\\n\\n2. While in-distribution performance shows significant improvement after finetuning (>30% increase), out-of-distribution gains are marginal (5-8%). The modest OOD improvement suggests that training on ProverGen may not substantially enhance general reasoning capabilities. Questions remain about whether the skills learned from this benchmark can transfer to broader reasoning tasks.\\n\\n3. Lacks detailed analysis of domain distribution in the generated dataset. The diversity of the generated data is not fully revealed.\", \"questions\": \"1. Please discuss the points mentioned in the weakness section.\\n\\n2. In the case study section, are there any statistics showing how many cases improved with the help of training?\\n\\n3. Would changing the generation model from Llama3.1-70B-Instruct to a more advanced model make any difference?\\n\\n4. Is this generation pipeline only helpful for building benchmarks, or can the synthetic dataset be used for future training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer oQpi (2/3)\", \"comment\": \"> Follow-up Q6: The effect of distraction in deductive reasoning is also demonstrated in BoardGameQA\\n\\nBoardGameQA focuses on natural language reasoning with contradictory information, which is a different domain from FOL reasoning. The distractions in BoardGameQA significantly differ from those in FOL reasoning. In BoardGameQA, distractions are contradictory rules, whereas in FOL reasoning, distractions stem from irrelevant or indirectly related rules that are not used in the reasoning chain. 
Crucially, unlike in BoardGameQA, distractions in FOL reasoning do not contradict the fundamental premises.\", \"below_is_an_example_to_illustrate_the_differences\": \"| | Premises | Distractions |\\n|--------|------------|---------------|\\n| BoardGameQA | All travelers entering Y from X need to show negative covid19 test results. | Travelers visiting Y for less than a month do not require covid19 tests anymore. |\\n| ProverGen | If Sawyer is a performer, then he either has good dance skills or is a musician | If someone either sings opera or plays an instrument, then they are a musician |\\n\\n> Follow-up Q7: A verified training dataset could be a good contribution, though I'm not very confident that checking 60 samples would be sufficient to verify the dataset.\\n\\nIn our paper, we demonstrated that finetuning on the training set enhances the performance of LLMs on both in-distribution and out-of-distribution datasets. This improvement is indicative of the high quality of the training dataset. Should the dataset be of poor quality, performance gains on verified and out-of-distribution datasets would not be observed. Additionally, a manual inspection of 60 randomly selected samples revealed no quality issues, suggesting an error probability of less than 1/60. This further substantiates the robustness and reliability of our dataset.\\n\\n> Concern 1: It is still unclear what is unique about this dataset. While it seems to be created in a slightly different way than prior work, it is unclear what can people learn from ProverGen about the LLM's reasoning capability that benchmarks fail to reveal as I replied above.\\n\\nAs mentioned in our paper, the main contribution is the framework, which is able to create FOL reasoning dataset that has the following four merits: Scalability; Natural and Diverse Language; Symbolic Representations; Faithful Reasoning Chains.\\n\\nOur framework's novelty has been thoroughly explained in our previous responses. 
If they are still not convincing, we can further clarify this aspect through our results.\\n\\n- We created a dataset that is significantly harder than previous datasets. As mentioned in your review: \\\"Nevertheless, an evaluation dataset could also be a concrete contribution, but in order to be significant, it needs to reveal something new that was neglected and overlooked in other benchmarks, or bring new or significant harder challenges to the table. \\\", this could be a concrete contribution.\\n- Our dataset is devoid of the data contamination issues observed in prior manually created datasets and is designed to evolve alongside advancements in LLMs. This dataset can be updated regularly at minimal cost, effectively preventing data contamination. Additionally, it is model-agnostic, facilitating seamless integration with emerging models as they become available.\\n- Our dataset is the first to comprehensively address the essential criteria (e.g., scalability; natural and diverse language; symbolic representations; faithful reasoning chains) for an effective benchmark in FOL reasoning. It can support a bunch of downstream tasks, such as NL-FOL translation, and tool-based logic problem solving.\\n- Training on the created dataset enhances the performance of LLMs on both in-distribution and out-of-distribution datasets. This improvement is not observed with previous datasets, such as ProofWriter.\"}", "{\"comment\": \"Thank you to the authors for the reply and the extra analysis and statistics provided. I am willing to offer a raise in the \\\"contribution\\\" rating, but the overall rating will remain the same.\"}", "{\"title\": \"Response to Reviewer 64s6 (1/2)\", \"comment\": \"Thank you for your constructive feedback and support for our work. We are grateful that you are interested in our paper. We respond to your questions and suggestions below.\\n\\n> W1: Some details require further clarification. 
When generating the logic skeleton, how are the facts and rules selected? Are they extracted from the generated background story? Providing more details on how the FOL relationships are incorporated at this step would help the reader better understand the process.\\n\\nThank you for your suggestion. We appreciate the opportunity to clarify the process.\\n\\nWhen generating the logic skeleton, neither facts nor rules are directly extracted from the background story. Instead, the rules are randomly sampled from combinations of 1-2 connectives (see details in our response to Q5 of Reviewer NAgL). Each sampled rule is validated with Prover9 to ensure its capability to deduce the required conclusion effectively.\\n\\nIn contrast, the facts are generated during the Statement Translation by leveraging LLMs. We prompt the LLMs to replace placeholders in the logic skeleton with suitable predicates, while ensuring that they do not contradict real-world common sense. See the prompt in Appendix B.\\n\\nThe primary function of the background story in our framework is to establish the context for the problem. This context guides the LLMs in promoting diversity in the responses. Without this contextual backdrop, LLMs might fail to produce diversified outputs, a phenomenon observed in other domains as well [1].\\n\\nWe hope this explanation clarifies the process. We have incorporated this clarification into our revised paper.\\n\\n[1] Jentzsch, Sophie, and Kristian Kersting. \\\"ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models.\\\" arXiv 2023.\\n\\n> W2 part1: You mentioned that the bottom-up approach in previous work may result in disconnected and contradictory premises. Is it possible to verify this claim with some data?\\n\\nThank you for your insightful questions. 
We provide a qualitative analysis of why the prior methodology tends to encounter issues with disconnected or contradictory premises.\\n\\nThe bottom-up approach needs to prepare all the facts at the beginning and then merge them carefully using some rules. This process can be challenging as it may encounter some facts that are hard to merge. In the table presented below, the process initiates with F1 (practice_piano) and F2 (healthy_lifestyle). After the first iteration, we encounter challenges in merging F3 (improved_skills) and F4 (fewer_health_problem).\\n\\n| Rules | Natural language |\\n|----------------------------------------------------|---------------------------------------------------------------------------------|\\n| F1 (practice_piano) \\u2192 F3 (improved_skills) | If Jack practices piano every day, then his skills will greatly improve |\\n| F2 (healthy_lifestyle) \\u2192 F4 (fewer_health_problem) | Anyone who has a healthy lifestyle is likely to experience fewer health problem |\\n\\nIn contrast, our top-down method starts with a final goal and back-propagate it consecutively, ensuring each step naturally connects to the next. This approach prevents disconnection by maintaining a clear line of reasoning throughout the process. Moreover, by initiating from the goal, this approach facilitates a more controlled distribution of the goal's truth value.\\n\\n> W2 part2: Besides, regarding the issue of self-contradictory premises, since you include distracting premises, how do you ensure that these distractions do not lead to contradictory conclusions? While you mention that the distractions do not directly affect the core premises needed for the final conclusion, could they potentially introduce indirect contradictions?\\n\\nIn our framework, we employ two types of distractions. The first type of distractions are created by altering subject names. 
Since these names are generic (e.g., common human or animal names without specific meaning), they do not inherently introduce contradictions. The second type of distraction arises during the logic skeleton generation stage. Our symbolic prover ensures that such distractions only lead to an \\\"Uncertain\\\" status for related facts in the reasoning tree (e.g., f1, f4 in Figure 1), thereby protecting the integrity of the core premises.\\n\\nWhile we acknowledge the theoretical possibility of indirect contradictions, our manual review of 60 examples (20 examples from each subset) from our dataset revealed no such instances. This suggests that the likelihood of indirect contradictions is low, roughly below 2% based on our sample.\\n\\nWe are committed to further investigating this issue and will incorporate more robust checks in future iterations of our work. Thank you once again for highlighting this important consideration.\"}", "{\"title\": \"Response to Reviewer 64s6\", \"comment\": \"Thank you for your quick response. Yes, the dataset is designed such that all necessary information is explicitly provided in the premises. This ensures that the target FOL reasoning can be performed without relying on commonsense knowledge or additional assumptions.\"}", "{\"title\": \"Response to Reviewer NAgL\", \"comment\": \"Thank you for your positive feedback. We are more than willing to provide answers to your questions.\\n> Follow-up Q1: On experiments with corrupted dataset without universal rules\\n\\nThank you for your insightful analysis. Testing the shortcut problem in a straightforward manner is indeed challenging. Using rules that are out-of-distribution from the model's training data is ideal. However, since LLMs are trained on extensive, non-public datasets that encompass a wide range of real-world knowledge, it's difficult to determine if a rule is truly OOD. 
Creating fictional rules is a viable alternative, but it's crucial to ensure they don't conflict with the existing knowledge in LLMs. Representing rules and facts with fictional symbols might be a solution, though this shifts the problem from natural language reasoning to symbolic reasoning, which is a different domain.\\n> Follow-up Q2: On the coverage of proof generation. When generating logical forms, are compositional forms generated?\\n\\nYes, our framework does generate compositional forms, including examples like $A \\\\land (B \\\\land C)$, $A \\\\land (B \\\\lor C)$, $A \\\\lor (B \\\\land C)$, and $A \\\\lor (B \\\\lor C)$. However, to ensure clarity and maintain interpretability when translating these forms into natural language, we have intentionally limited the coverage to combinations involving up to two connectives. This decision helps mitigate potential ambiguities that can arise with more complex nested structures. \\n> Follow-up Q3: On the coverage of proof generation. Does the dataset focus on modus ponens? Or are other deduction rules also generated?\\n\\nThank you for your insightful questions. In our framework, we generate a variety of deduction rules beyond modus ponens. Specifically, we include both $A \\\\oplus B$ and $A \\\\lor B$. It's important to highlight that $A \\\\lor B$ can be transformed into $\\\\lnot A \\\\rightarrow B$, effectively aligning it with modus ponens in terms of inference. The unique rule in our approach is $A \\\\oplus B$, which is equivalent to $(A \\\\rightarrow \\\\lnot B) \\\\land (\\\\lnot A \\\\rightarrow B)$. \\n\\nWe intentionally exclude rules like $A \\\\land B$ because they simply present two facts rather than facilitate reasoning. However, we incorporate such rules when they serve as goals in FOL problems. 
For example, if the context of a problem states that both $A$ and $B$ are true, the objective might be to deduce the truth value of $A \\\\land B$.\\n\\nWe truly appreciate your insightful feedback and welcome further discussions.\"}", "{\"title\": \"Response to Reviewer BNmr (2/2)\", \"comment\": \"> Q1: Please discuss the points mentioned in the weakness section.\\n\\nPlease see our responses above.\\n\\n> Q2: In the case study section, are there any statistics showing how many cases improved with the help of training?\\n\\nWe have counted the number of improved cases after training in each dataset. See the table below.\\n\\n\\n| | ProverGen-Easy | ProverGen-Medium | ProverGen-Hard | ProntoQA | ProofWriter | FOLIO |\\n|---------------|:---------------|:-----------------|:---------------|:---------|-------------|-------|\\n| # Improvement | 107 | 220 | 173 | 44 | 53 | 7 |\\n| Test Set Size | 500 | 500 | 500 | 500 | 600 | 140 |\\n\\n> Q3: Would changing the generation model from Llama3.1-70B-Instruct to a more advanced model make any difference?\\n\\nWe conjecture that changing the generation model from Llama3.1-70B-Instruct to a more advanced model would make a difference. Our framework is designed to be model-agnostic, allowing seamless integration of newer and more capable models as they become available. This adaptability means that as model capabilities advance, our dataset can benefit from enhanced quality and diversity of generated data. \\n\\n> Q4: Is this generation pipeline only helpful for building benchmarks, or can the synthetic dataset be used for future training?\\n\\nThe generation pipeline we propose is indeed beneficial for benchmarking as well as training. As demonstrated in Section 5 of our paper, the synthetic datasets we create can be effectively utilized for training, leading to improved performance on both in-distribution and OOD evaluations. 
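As an aside, the connective equivalences cited in the response to Reviewer NAgL above (that $A \lor B$ matches $\lnot A \rightarrow B$, and that $A \oplus B$ matches $(A \rightarrow \lnot B) \land (\lnot A \rightarrow B)$) can be verified mechanically with a short truth-table script. This is an illustrative check we wrote for this discussion, not part of the paper's pipeline:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

for a, b in product([False, True], repeat=2):
    # A xor B  <=>  (A -> not B) and (not A -> B)
    assert (a != b) == (implies(a, not b) and implies(not a, b))
    # A or B   <=>  (not A) -> B, i.e., the modus-ponens-compatible form.
    assert (a or b) == implies(not a, b)

print("equivalences hold on all four assignments")
```

Both equivalences pass on all four Boolean assignments, consistent with the claim that disjunctive rules reduce to modus ponens while the exclusive-or rule contributes a genuinely distinct inference pattern.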
\\n\\nWe greatly appreciate the reviewer's feedback and welcome further discussions.\"}", "{\"title\": \"Response to Reviewer oQpi (1/2)\", \"comment\": \"Thank you for your comments and constructive suggestions. We address specific weaknesses and questions below.\\n\\n> W1: As mentioned above, the creation of this dataset largely resembles that of ProntoQA. The main difference is the LLM translated facts and rules that make the statements semantically more natural and diverse. However, it is not sufficiently demonstrated how this aspect contributes to the novelty of this benchmark.\\n\\nThank you for your feedback. We appreciate the opportunity to clarify the distinctions of our dataset compared to ProntoQA, beyond the use of LLM translation.\\n1. **Generation Process**: Our framework employs a top-down approach to generate logic skeletons, contrasting with ProntoQA\\u2019s reliance on predefined logic templates. This method avoids the risk of generating disconnected or contradictory premises, ensuring more coherent datasets.\\n2. **Symbolic Prover Integration**: Our framework includes a symbolic prover, allowing for comprehensive coverage of possible logic expressions and the generation of diverse reasoning trees. In contrast, ProntoQA's approach is limited to predefined reasoning paths, which restricts the diversity of reasoning trees.\\n3. **Complexity Control**: Our framework offers extensive customization of reasoning steps and complexity, which is not available in previous datasets including ProntoQA.\\n\\nWe believe these aspects collectively contribute to the novelty and significance of our work. We are pleased to note that these contributions have been recognized by the other three reviewers.\\n\\n> W2: It is unclear what limitations or insights of LLMs are revealed by this dataset that others could not. 
This, together with the quality and significance issues to be discussed below, significantly undermine the novelty of this work.\\n\\nThank you for your feedback. Our work highlights the limitations of SOTA models in handling FOL problems requiring long reasoning chains and complex rules, as evidenced in Table 2. Unlike existing datasets, ours introduces specific distractions that significantly impair LLMs' reasoning capabilities (see Table 3). These insights contribute to a deeper understanding of LLM limitations and emphasize the dataset's novelty.\\n\\n> W3: Some FOL reasoning cases are potentially not covered by this benchmark: Judging from the pipeline and examples, the \\\"goal\\\" fact is an instantiated literal with unary predicate such as Elegant(Sawyer), and Swift(jack), but it does not cover cases with binary predicates or those with higher-order of arity, such as FriendOf(Amy, Bob). It also does not cover cases of composite facts, such as FriendOf(Amy, Bob) \\\\land Person(Bob)\\n\\nWe apologize for any confusion. The \\\"goal\\\" in our framework indeed accommodates both facts and rules, as noted in Section 3.1: \\\"The goal of FOL reasoning is to determine whether a given goal G (which can be a fact or a rule) is ...\\\". This encompasses unary, binary, and higher-order predicates, as well as composite facts like FriendOf(Amy, Bob) \\\\land Person(Bob). \\n\\n> W4: Lack of dataset quality check:\\n> - There lacks a check on whether the translated NL statement actually align with the ground-truth FOL rule\\n> - Furthermore, the translated universal rule is only checked by an LLM on whether it aligns with commonsense. This could be noisy as the LLM can hallucinate.\\n> - Without a quantitative measure on the quality of the translation, it is difficult to assess the dataset quality. At least one should provide the accuracy of the translation on a small held-out set that has manual annotations.\\n\\nThank you for your suggestion. 
We included quality control in our translation process, as detailed in our response to W4 for Reviewer 64s6. We manually verified translations on 60 instances sampled from our benchmark and found no errors, demonstrating the effectiveness of our quality control measures.\\n\\nAdditionally, the training part of the paper also serves as quality checking. Finetuning on the generated dataset enhances the performance of LLMs on both in-distribution and OOD datasets, indicating the relatively high quality of the generated data.\\n\\nDespite the above quality control processes and finetuning experiments, we agree that our framework can be further improved by introducing more advanced quality control processes. We will explore more about it in our future work.\"}", "{\"comment\": \"Dear Reviewer oQpi,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our work! \\n\\nAs the discussion period comes to an end, we would deeply appreciate it if you could take a moment to review our responses and let us know if they adequately address your concerns and questions.\"}", "{\"title\": \"Response to Reviewer NAgL (2/2)\", \"comment\": \"> Q4: Footnote 3: PrOntoQA avoids using real-world concepts to ensure the generated sentences do not contradict with real-world knowledge (i.e. the \\\"fictional ontology\\\" setting).\\n\\nThank you for pointing it out. We have revised Footnote 3 as follows:\\n\\n\\\"Both ProofWriter and PrOntoQA generate natural language expressions by using predefined templates for each logical rule... PrOntoQA intentionally avoids real-world concepts to prevent conflicts with real-world knowledge. They also evaluate LLMs' behavior on examples that contain facts and rules that are consistent (or inconsistent) with the real-world knowledge. In contrast, ProofWriter does not incorporate such a mechanism.\\\"\\n\\n> Q5: What are the set of possible deduction rules from which proof steps are sampled? 
Are deduction rules involving universal and existential quantifiers generated? More broadly, what are the completeness properties of this proof skeleton generation procedure?\\n\\n**Coverage of generated deduction rules:** Our rules encompass seven fundamental FOL symbols, including four connectives (\\u2227, \\u2228, \\u2192, \\u2295), two quantifiers (\\u2200, \\u2203), and negation (\\u00ac). During logic skeleton generation, deduction rules are sampled from combinations of these connectives, as listed below:\\n[A \\u2192 B, A \\u2295 B, A \\u2228 B, A \\u2227 B, A \\u2227 (B \\u2227 C), (A \\u2227 B) \\u2192 C, A \\u2227 (B \\u2228 C), A \\u2228 (B \\u2227 C), (A \\u2228 B) \\u2192 C, A \\u2228 (B \\u2228 C), (A \\u2295 B) \\u2192 C, A \\u2192 (B \\u2295 C), A \\u2192 (B \\u2228 C), A \\u2192 (B \\u2227 C)]\\n\\nEach rule initially includes \\u2200. If a rule contradicts real-world common sense, as judged by an LLM, \\u2200 is replaced with \\u2203 to reduce contradictions. Negation (\\u00ac) is typically used within the rule\\u2019s fact.\\n\\n**Deduction rules not generated:** (1) Rules with more than two connectives are excluded due to complexity and ambiguity. (2) Rules like [(A \\u2227 B) \\u2295 C, A \\u2228 (B \\u2295 C), (A \\u2295 B) \\u2228 C, (A \\u2295 B) \\u2227 C, A \\u2192 (B \\u2192 C)] are also excluded due to ambiguous translations in our practice.\\n\\n> Q6: Line 303: \\\"we opt to the specific rule\\\" -> \\\"we opt for the specific rule\\\"\\n\\nWe have corrected it in the PDF.\\n\\n> Q7: Have the authors experimented with more than 2 few-shot examples in the prompt? If so, were there any significant differences in behavior? \\n\\nWe have conducted experiments using 5-shot examples in both Direct and CoT settings. 
The results are detailed below, with the changes relative to 2-shot presented in parentheses.\\n\\n**Direct Prompting:**\\n| | ProverGen-Easy | ProverGen-Medium | ProverGen-Hard | ProntoQA | ProofWriter | FOLIO | Avg $\\\\Delta$ |\\n|----|:----|:----|:-----|:------|-----|-----|:----|\\n| GPT-4o | 85.20 (-2.00) | 68.40 (-0.20) | 43.80 (-2.40) | 88.20 (-3.60) | 57.33 (+1.00) | 72.86 (+5.00) | -0.37 |\\n| Claude-3.5-Sonnet | 93.20 (+8.20) | 76.80 (+8.60) | 51.00 (+8.20) | 85.20 (-3.40) | 50.33 (-4.67) | 74.29 (-3.56) | +2.23 |\\n| Llama3.1-8B-Instruct | 64.40 (+17.80) | 41.80 (-1.20) | 38.00 (-1.00) | 58.00 (+7.60) | 44.50 (+0.70) | 46.43 (-7.86) | +2.67 |\\n| Llama3.1-70B-Instruct | 82.80 (+0.80) | 64.40 (+0.20) | 49.20 (+1.60) | 74.00 (-6.60) | 51.33 (+1.00) | 69.29 (+1.43) | -0.26 |\\n| Mistral-7B-Instruct | 57.40 (+0.60) | 48.40 (+1.60) | 37.40 (+0.20) | 52.40 (+2.40) | 47.83 (+5.50) | 52.86 (-1.43) | +1.48 |\\n| Mistral-Large-Instruct | 85.00 (+0.40) | 71.40 (+2.20) | 53.60 (+4.00) | 67.40 (-3.60) | 63.50 (+3.17) | 76.43 (-0.71) | +0.91 |\\n| Mixtral-8x22B-Instruct | 77.60 (+2.20) | 58.00 (+0.60) | 39.60 (+0.60) | 60.00 (-5.20) | 40.50 (+0.33) | 73.57 (-0.72) | -0.37 |\\n\\n**CoT Prompting:**\\n\\n| | ProverGen-Easy | ProverGen-Medium | ProverGen-Hard | ProntoQA | ProofWriter | FOLIO | Avg $\\\\Delta$ |\\n|-----|:------|:-----|:-----|:-----|:----|----|----|\\n| GPT-4o | 95.00 (+0.80) | 81.20 (+1.80) | 55.00 (+5.00) | 99.20 (-0.80) | 71.00 (+3.67) | 74.29 (+2.15) | +2.10 |\\n| Claude-3.5-Sonnet | 92.00 (-3.20) | 81.60 (-2.00) | 61.00 (+4.60) | 95.80 (-3.40) | 75.50 (-0.83) | 82.86 (+2.15) | -0.45 |\\n| Llama3.1-8B-Instruct | 80.60 (+5.00) | 45.40 (-1.20) | 31.20 (-2.40) | 74.80 (-4.80) | 58.67 (+1.84) | 57.86 (-5.68) | -1.21 |\\n| Llama3.1-70B-Instruct | 92.60 (+2.20) | 73.40 (+0.20) | 50.20 (+3.40) | 92.80 (-2.60) | 67.00 (-4.17) | 75.00 (+0.71) | -0.04 |\\n| Mistral-7B-Instruct | 65.60 (-6.40) | 50.20 (-0.80) | 38.00 (-3.80) | 65.80 (+4.60) | 48.33 (+2.33) | 
63.57 (-0.01) | -0.68 |\\n| Mistral-Large-Instruct | 94.40 (+1.80) | 77.60 (+1.80) | 56.60 (+4.40) | 99.00 (+0.40) | 77.67 (+4.17) | 82.86 (-0.71) | +1.98 |\\n| Mixtral-8x22B-Instruct | 91.00 (+3.40) | 73.80 (+7.00) | 50.20 (+2.60) | 86.20 (+6.60) | 59.67 (+2.00) | 72.14 (-1.43) | +3.36 |\\n\\nOur findings indicate that increasing the number of examples from 2-shot to 5-shot did not consistently enhance the performance of LLMs. This variability in outcomes may be attributed to differences in the models' in-context learning capabilities, which can affect how effectively they utilize additional examples. We have added the results in Appendix G.\\n\\nWe greatly appreciate the reviewer's feedback and welcome further discussions.\"}", "{\"summary\": \"The paper proposes an automatic framework for generating high-quality datasets that adhere to First-order Logic principles, while also being scalable and diverse. The pipeline consists of three stages: Background Story Generation, Logic Skeleton Generation, and Statement Translation. Additionally, the framework introduces distracting premises into the dataset to enhance the comprehensiveness of the benchmark. Experiments show that state-of-the-art LLMs struggle with these logical reasoning tasks, and fine-tuning LLMs on this dataset leads to greater improvement compared to previous logical reasoning benchmarks. An ablation study is conducted to demonstrate the necessity of including distracting factors.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"I also work on logical reasoning, and I personally really like this paper. It addresses some key limitations of the previous benchmarks.\\n\\n(1)\\tFor the first time, this paper proposes an automatic pipeline that fully encompasses First-order Logic (FOL) relationships while being scalable and faithful. 
Additionally, it offers more natural and diverse language compared to previous benchmarks, as well as symbolic language, and includes a complete reasoning chain.\\n\\n(2)\\tExperiments demonstrate that this benchmark poses a significant challenge to state-of-the-art LLMs.\\n\\n(3)\\tFine-tuning using this new dataset results in greater improvements compared to previous logical reasoning datasets, highlighting the advantages of the dataset.\\n\\n(4)\\tIt reduces the issues of disconnection and contradiction present in previous benchmarks, making the evaluation more reliable.\", \"weaknesses\": \"I don't see any major weaknesses in this paper, but it could benefit from improvements in the following areas:\\n\\n(1)\\tSome details require further clarification. When generating the logic skeleton, how are the facts and rules selected? Are they extracted from the generated background story? Providing more details on how the FOL relationships are incorporated at this step would help the reader better understand the process.\\n\\n(2)\\tYou mentioned that the bottom-up approach in previous work may result in disconnected and contradictory premises. This is a crucial point, as I have also encountered such cases while working on symbolic reasoning datasets. Is it possible to verify this claim with some data? For example, you could show the proportion of disconnected and contradictory cases in datasets like ProofWriter. I understand that, intuitively, the top-down method should generate fully connected logic. So alternatively, you could provide a qualitative analysis of why prior methodology tends to have this problem and how your method effectively addresses it. Besides, regarding the issue of self-contradictory premises, since you include distracting premises, how do you ensure that these distractions do not lead to contradictory conclusions? 
While you mention that the distractions do not directly affect the core premises needed for the final conclusion, could they potentially introduce indirect contradictions?\\n\\n(3)\\tIt might be helpful to emphasize the importance of reducing disconnected and contradictory cases in your contribution, as they hinder the reliability of the evaluation.\\n\\n(4)\\tGiven the probabilistic nature of LLMs, the benchmark could be further improved by implementing a quality control process, particularly in the stages of generating the logic skeleton and translating FOL into natural language.\", \"questions\": \"(1)\\tHow are the facts and rules selected when generating the logic skeleton, and how do you incorporate the full set of FOL relationships at this stage?\\n\\n(2)\\tDoes the LLM introduce any errors when translating FOL statements into natural language, as well as during the Logic Skeleton Generation?\\n\\n(3)\\tIs there any invalid case generated and is there a quality control process in place to filter out invalid cases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a method to generate a first-order reasoning QA dataset to evaluate the reasoning abilities of LLMs. Whether LLMs indeed learn to reason has been an important topic for at least 2-3 years. Here, there are a few main considerations. First, how to truly verify whether an LLM reasons. Second, whether the results generalize to other datasets. Third, whether the dataset can help improve LLM reasoning. This paper is in general well written and easy to follow. Technically, it provides a solid answer to all three of the main considerations above. The proposed method is novel and generative in nature, which can potentially alleviate the problem of data contamination. Furthermore, the generated proof tree is guaranteed to be correct. 
And the OOD results are somewhat encouraging. In terms of the limitations, I would encourage the authors to further expand the dataset, include comparisons with more LLMs, and conduct a more comprehensive analysis against other datasets that also highlight reasoning. The comparison could provide more insights.\", \"additional_comments_on_reviewer_discussion\": \"The most important issue in the discussion period is whether the proposed benchmark is novel. Due to its size, it clearly could have limitations, and a reviewer (with whom I agree) raises this concern. Yet, its proposed method is principled. And its analysis, together with its OOD experiments, helps make a case that this dataset could be generally useful. Because of this, I believe the pros (rather than the cons) are more widely shared among the reviewers.\"}", "{\"title\": \"Response to Reviewer 64s6 (2/2)\", \"comment\": \"> W3: It might be helpful to emphasize the importance of reducing disconnected and contradictory cases in your contribution, as they hinder the reliability of the evaluation.\\n\\nThank you for your advice. We have incorporated it into the contributions in our paper.\\n\\n> W4: Given the probabilistic nature of LLMs, the benchmark could be further improved by implementing a quality control process, particularly in the stages of generating the logic skeleton and translating FOL into natural language.\\n\\nThank you for your insightful feedback. We appreciate the opportunity to clarify our quality control processes.\\n\\n1. **Logic Validation:** Our framework includes a robust logic validation step. For each instance, we input both core premises and distractions into the symbolic prover to ensure they correctly deduce the conclusion's truth value.\\n2. **Conflict Resolution:** During rule translation, we check for previously used predicates to avoid redundancy and potential conflicting facts. 
Additionally, we utilize LLMs to assess whether the generated universal rules align with real-world knowledge. In cases of conflict, we opt for specific rules instead.\\n3. **Translation Quality Control:** We apply a heuristic method to ensure that all involved entities appear in both the symbolic expression and the natural language expression. For example, when translating \\\"poet(Sawyer)\\\" into \\\"Sawyer is a poet,\\\" we verify that both the name \\\"Sawyer\\\" and the predicate \\\"poet\\\" are present in the translation.\\n\\nAdditionally, the training part of the paper also serves as a quality check. Finetuning on the generated dataset enhances the performance of LLMs on both in-distribution and OOD datasets, indicating the relatively high quality of the generated data.\\n\\nDespite the above quality control processes and finetuning experiments, we agree that our framework can be further improved by introducing more advanced quality control processes. We will explore this further in future work.\\n\\n> Q1: How are the facts and rules selected when generating the logic skeleton, and how do you incorporate the full set of FOL relationships at this stage?\\n\\nPlease see our response to W1. For the coverage of FOL rules, please see our response to Q5 of reviewer NAgL.\\n\\n> Q2: Does the LLM introduce any errors when translating FOL statements into natural language, as well as during the Logic Skeleton Generation?\\n\\nRegarding the Logic Skeleton Generation, no errors are introduced as this process is entirely handled by the symbolic prover, ensuring its accuracy. However, during the translation of FOL statements into natural language, there is a possibility that the LLM may introduce errors. To mitigate this, as stated in our response to W4, we have implemented heuristic quality control processes to minimize errors. 
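To make the entity-presence heuristic from point 3 above concrete, here is a minimal sketch of the idea. The function name and tokenization are our own illustrative assumptions, not the pipeline's actual code:

```python
import re

def passes_entity_check(fol_expr, nl_sentence):
    """Heuristic translation check: every identifier appearing in the
    symbolic expression must also appear in the natural-language output."""
    identifiers = set(re.findall(r"[A-Za-z_]+", fol_expr))
    nl_lower = nl_sentence.lower()
    return all(ident.lower() in nl_lower for ident in identifiers)

# The example from the response: poet(Sawyer) -> "Sawyer is a poet".
print(passes_entity_check("poet(Sawyer)", "Sawyer is a poet"))     # True
print(passes_entity_check("poet(Sawyer)", "Sawyer writes verse"))  # False
```

A translation that fails such a check would be regenerated or discarded; the substring matching here is deliberately loose, so a production version would likely need lemmatization for multi-word predicates like "improved_skills".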
\\n\\nTo further evaluate the quality of our generated data, we performed a manual inspection (same as in our response to W2 part2) and did not find any instance with translation errors. This suggests that the likelihood of translation errors is low, estimated at less than 2%.\\n\\n> Q3: Is there any invalid case generated and is there a quality control process in place to filter out invalid cases?\\n\\nPlease see our response to W4 and Q2.\\n\\nWe greatly appreciate the reviewer's feedback and welcome further discussions.\"}", "{\"summary\": \"The authors present a new method to generate first-order reasoning QA data to evaluate the reasoning abilities of LLMs. For each example, they utilize a synthetic generative process to produce the skeleton of the proof tree. They use an LLM to generate the subject and predicates of the example, where a seed subject and topic are sampled from public datasets of names and WordNet, respectively. The use of the LLM facilitates the generation of logical facts and rules that are consistent with the real world and ensures the generated natural language sentences are linguistically diverse and realistic, in contrast with other synthetic data generation pipelines which use templates. 
Their generated dataset, called ProverGen, is demonstrated to be effective in measuring the reasoning abilities of LLMs.\\n\\nInterestingly, the authors also show that fine-tuning models on ProverGen improves their OOD reasoning capabilities on datasets such as FOLIO and PrOntoQA.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The authors present a pipeline approach for generating QA data to test the reasoning abilities of LLMs.\", \"The natural language of the text in the examples generated by ProverGen are linguistically diverse and realistic, thanks to the use of an LLM in converting logical form into natural language.\", \"The authors ensure the proof tree of each example is correct by utilizing a symbolic prover (Prover9).\"], \"weaknesses\": [\"Some of the comparisons to previous work are not completely accurate (see questions below).\", \"The coverage of the output examples is not well described (i.e., the set of possible output proof skeletons, the set of deduction rules used for generation, etc).\", \"Since the described rules are specifically designed to be consistent with the LLM's background knowledge, the model can exploit background knowledge as a heuristic when solving examples from ProverGen.\"], \"questions\": \"I provide more detailed questions and comments below. 
The paper is well-written with only a small number of grammatical errors.\", \"table_1\": \"While it is true that the provided data CSVs for PrOntoQA don't contain logical form annotations for each example, they do provide code to parse any sentence in the dataset into FOL logical forms (since each example was generated by converting logical forms into natural language sentences).\", \"line_86\": \"\\\"ProverGen\\u2019s difficulty setting demonstrates a fine-grained complexity\\\"\\n I understand the intended meaning of this phrase thanks to the part of the sentence that follows this phrase, but this phrase itself is somewhat difficult to understand on its own.\\n\\nIt would also be good to mention the disadvantage of data contamination for manually-annotated datasets. In addition, a discussion of how the proposed dataset relates to the problem of data contamination would be welcome.\", \"footnote_3\": \"PrOntoQA avoids using real-world concepts to ensure the generated sentences do not contradict real-world knowledge (i.e. the \\\"fictional ontology\\\" setting). They also provide true and false ontologies to specifically test LLM behavior on examples that contain facts/rules that are consistent (or inconsistent) with the real world.\\n\\nSection 3.2.2: What are the set of possible deduction rules from which proof steps are sampled? Are deduction rules involving universal and existential quantifiers generated? I assume since the authors claim ProverGen has coverage over all connectives and quantifiers in FOL, that there is at least one possible deduction rule for each connective and quantifier. More broadly, what are the completeness properties of this proof skeleton generation procedure? What are the kinds of proofs that can and cannot be generated?\", \"line_303\": \"\\\"we opt to the specific rule\\\" -> \\\"we opt for the specific rule\\\"\\n\\nHave the authors experimented with more than 2 few-shot examples in the prompt? 
If so, were there any significant differences in behavior?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces ProverGen, a synthetically generated benchmark for deductive reasoning evaluation, which consists of 1.5K examples. ProverGen resembles prior work such as ProntoQA, where it generates a ground-truth reasoning path and then converts it into a deductive reasoning problem. On top of this, the authors use LLM to translate the rules and facts into NL statements based on a LLM generated background facts, making the problem more natural.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"see below\", \"weaknesses\": \"## Novelty\\n\\nAs mentioned above, the creation of this dataset largely resembles that of ProntoQA. The main difference is the LLM translated facts and rules that make the statements semantically more natural and diverse. However, it is not sufficiently demonstrated how this aspect contributes to the novelty of this benchmark. It is unclear what limitations or insights of LLMs are revealed by this dataset that others could not. This, together with the quality and significance issues to be discussed below, significantly undermine the novelty of this work.\\n\\n\\n## Quality\\n\\nThere are several issues with the creation of the dataset.\", \"some_fol_reasoning_cases_are_potentially_not_covered_by_this_benchmark\": [\"Judging from the pipeline and examples, the \\\"goal\\\" fact is an instantiated literal with unary predicate such as *Elegant(Sawyer)*, and *Swift(jack)*, but it does not cover cases with binary predicates or those with higher-order of arity, such as *FriendOf(Amy, Bob)*. 
It also does not cover cases of composite facts, such as *FriendOf(Amy, Bob) \\land Person(Bob)*\"], \"lack_of_dataset_quality_check\": [\"There lacks a check on whether the translated NL statement actually align with the ground-truth FOL rule\", \"Furthermore, the translated universal rule is only checked by an LLM on whether it aligns with commonsense. This could be noisy as the LLM can hallucinate.\", \"Without a quantitative measure on the quality of the translation, it is difficult to assess the dataset quality. At least one should provide the accuracy of the translation on a small held-out set that has manual annotations.\"], \"finetuning_results_interpretation\": \"- As mentioned in the appendix, the model finetuned on FOLIO is trained with only 1K examples, while the other two have 50K. I'm not sure if one can reliably draw any conclusion by comparing it with the other two as the training data difference is too big.\\n\\n## Clarity\\n\\nThis paper is generally easy to follow.\\n\\n## Significance\\n\\nAnother major concern of mine is the significance of this benchmark.\\n- This dataset contains only 1.5K examples with synthetically generated reasoning chains and NL translation that has no direct verification. While the authors generated a training set for experiments in section 5, it was not presented as part of the contribution and there also lacks direct verification on its quality and alignment. That said, this dataset really can only be used to evaluate an LLM on a specific reasoning task, i.e., deductive reasoning.\\n- Nevertheless, an evaluation dataset could also be a concrete contribution, but in order to be significant, it needs to reveal something new that was neglected and overlooked in other benchmarks, or bring new or significantly harder challenges to the table. Unfortunately, this is not sufficiently demonstrated in the paper. 
Throughout the comparison in Table 1, ProverGen is on par with other benchmarks with only the hard set being somewhat more challenging. But where is the insight? Does the model fail because of an important property that is exclusively tested in ProverGen is missing? What can people learn from ProverGen about the LLM's reasoning capability that benchmarks fail to reveal?\\n\\nThat said, the authors need to address this issue before this work can be considered significant.\", \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Hi authors, thank you for your detailed response. I am satisfied with your explanation and will retain my rating.\", \"i_have_a_clarification_question\": \"Is all the necessary knowledge explicitly listed in the premises to arrive at the desired answer, without requiring the interpretation of commonsense or assumptions? In other words, does the dataset disentangle the target FOL reasoning from commonsense reasoning?\"}", "{\"title\": \"Response to Reviewer BNmr (1/2)\", \"comment\": \"Thank you for your comments and constructive suggestions. We address your concerns and questions below.\\n\\n> W1: The scope is relatively limited. Focuses exclusively on first-order logic reasoning, which may not fully represent real-world reasoning scenarios. Lacks evaluation on general reasoning benchmarks (e.g., MMLU, GSM8K, BIG-bench) to assess broader impact of training.\\n\\nThank you for your thoughtful feedback. Aligned with previous works[1-3], we primarily focused on FOL because it is a fundamental aspect of deductive reasoning. Our results show that LLMs still face challenges with complex FOL tasks, particularly in scenarios involving long-chain reasoning. While benchmarks such as MMLU, GSM8K, and BIG-bench are valuable for evaluating general reasoning abilities, they differ significantly from FOL reasoning. 
\\n\\nAddressing task generalization across a broader scope, as you suggest, would require systematically addressing challenges like catastrophic forgetting. This would involve preparing a comprehensive set of reasoning-related instruction-tuning data, optimizing data-mixing strategies, and conducting controlled experiments to assess whether incorporating our generated FOL data enhances performance across a broader range of reasoning tasks. Such investigations fall outside the scope of our current research. However, we appreciate this insightful question and acknowledge the importance of broader benchmarks, we seriously consider it and plan to explore it in future studies.\\n\\n[1] Olausson, Theo X., et al. \\\"LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers.\\\" EMNLP 2023.\\n\\n[2] Yang, Yuan, et al. \\\"Harnessing the power of large language models for natural language to first-order logic translation.\\\" arXiv preprint arXiv:2305.15541 (2023).\\n\\n[3] Kimura, Daiki, et al. \\\"Neuro-Symbolic Reinforcement Learning with First-Order Logic.\\\" EMNLP 2021.\\n\\n> W2: While in-distribution performance shows significant improvement after finetuning (>30% increase), out-of-distribution gains are marginal (5-8%). The modest OOD improvement suggests that training on ProverGen may not substantially enhance general reasoning capabilities. Questions remain about whether the skills learned from this benchmark can transfer to broader reasoning tasks.\\n\\nIt is a normal phenomenon that the performance improvements on in-distribution datasets are larger than OOD datasets. However, ProverGen uniquely enhances OOD performance compared to datasets like ProofWriter and FOLIO. As recognized by Reviewer NAgL and 64s6, finetuning on ProverGen improves OOD reasoning on datasets such as FOLIO and PrOntoQA.\\n\\n> W3: Lacks detailed analysis of domain distribution in the generated dataset. 
The diversity of the generated data is not fully revealed.\\n\\nThank you for your feedback. We have conducted a detailed analysis of the domain distribution within the ProverGen dataset. For each instance, predicates from premises were extracted and categorized using WordNet. The domain of each instance was determined by the majority category of its predicates. Our analysis reveals that the dataset spans a wide range of domains. This diversity ensures broad applicability and robustness in various contexts.\\n\\n\\n| category | count |\\n|---------------|-------|\\n| possession | 237 |\\n| cognition | 219 |\\n| social | 215 |\\n| communication | 201 |\\n| stative | 168 |\\n| change | 100 |\\n| creation | 58 |\\n| contact | 46 |\\n| motion | 45 |\\n| perception | 36 |\\n| consumption | 33 |\\n| competition | 24 |\\n| emotion | 19 |\\n| body | 12 |\\n| weather | 1 |\"}", "{\"title\": \"Rebuttal summary\", \"comment\": \"We sincerely appreciate all the reviewers for their thorough evaluations. The constructive suggestions have greatly enhanced the quality of our work, and their positive feedback is especially inspiring for us.\", \"we_are_pleased_to_note_that_reviewers_have_recognized\": [\"Our paper's success in addressing key limitations of previous benchmarks, such as unnatural and monotonous language, the absence of symbolic language and complete reasoning chains, and disconnected or contradictory premises (Reviewers NAgL, BNmr, 64s6).\", \"The novelty of our framework (Reviewers NAgL, BNmr, 64s6), and the clarity and quality of the paper's presentation (all Reviewers).\", \"The experiments as interesting, clear, and sound (Reviewers NAgL, BNmr).\", \"The high quality of the ProverGen dataset, along with scalability, diversity, and faithfulness (Reviewers NAgL, BNmr, 64s6).\", \"We have carefully addressed the concerns and questions from each reviewer, providing detailed responses. 
Based on this valuable feedback, we have revised the paper with the following updates (highlighted in red for easy identification):\", \"Discussed data contamination issues in manually-annotated datasets and how our framework addresses them, thanks to Reviewer NAgL's insightful suggestion.\", \"Emphasized the importance of reducing disconnected and contradictory cases, as suggested by Reviewer 64s6.\", \"Introduced a new section detailing the quality control process in our framework, in response to questions from Reviewer 64s6 and oQpi.\", \"Once again, we would like to thank all the reviewers for their time and efforts in helping us enhance the paper!\"]}", "{\"title\": \"Response to Reviewer oQpi (2/2)\", \"comment\": \"> W5: Finetuning results interpretation: As mentioned in the appendix, model finetuned on FOLIO is trained with only 1K examples, while the other two has 50K. I'm not sure if one can reliably draw any conclusion by comparing it with the other two as the training data difference is too big.\\n\\nThe finetuning results for ProverGen and ProofWriter were indeed conducted on 5k instances, not 50k. The training set of FOLIO only has 1k instances, which is why it was trained with only 1k examples. These numbers are clarified in Appendix E.\\n\\nTo ensure fairness in comparison, we optimized training configurations through various hyperparameter experiments, including epochs, learning rates, and data sizes, based on performance on validation sets. Specifically, the FOLIO models were trained for 5 epochs, while the other datasets were trained for only 1 epoch, allowing us to balance the differences in data availability. \\n\\n> W6: This dataset contains only 1.5K examples with synthetically generated reasoning chains and NL translation that has no direct verification. 
While the authors generated a training set for experiments in section 5, it was not presented as part of the contribution and there also lacks direct verification on its quality and alignment. That said, this dataset really can only be used to evaluate an LLM on a specific reasoning task, i.e., deductive reasoning.\\n\\nThank you for your feedback. Our framework is designed to be scalable, allowing for the generation of additional examples as required. While we did not initially highlight the dataset as a primary contribution, its utility extends beyond mere evaluation. In Section 5, we demonstrate its effectiveness in enhancing LLMs' first-order logic reasoning abilities during training. Additionally, the value of our dataset has been recognized by other reviewers, reinforcing its potential impact.\\n\\n> W7: Nevertheless, an evaluation dataset could also be a concrete contribution, but in order to be significant, it needs to reveal something new that was neglected and overlooked in other benchmarks, or bring new or significant harder challenges to the table. Unfortunately, this is not sufficiently demonstrated in the paper. Throughout the comparison in Table 1, ProverGen is on par with other benchmarks with only the hard set being somewhat more challenging. But where is the insight? Does the model fail because of an important property that is exclusively tested in ProverGen is missing? What can people learn from ProverGen about the LLM's reasoning capability that benchmarks fail to reveal?\", \"here_are_the_key_insights_provided_in_our_work\": \"1. Our study shows that LLMs still struggle significantly with complex reasoning tasks, particularly those involving long chains of reasoning and intricate logic.\\n2. We conducted ablation studies revealing that distracting factors and shuffled premises notably impact model accuracy, an area not previously explored in existing benchmarks.\\n3. 
Our scalable, complex, natural, and diverse FOL dataset enhances LLMs' logical reasoning capabilities , even on out-of-distribution datasets.\\n\\nAdditionally, other reviewers have recognized the value of our dataset, acknowledging its role in presenting new challenges to current LLMs and offering fresh insights into improving logical reasoning skills.\\n\\nWe greatly appreciate the reviewer's feedback and welcome further discussions.\"}", "{\"title\": \"Response to Reviewer NAgL (1/2)\", \"comment\": \"Thank you for your comments and constructive suggestions. We address specific weaknesses and questions below.\\n\\n> W1: Some of the comparisons to previous work are not completely accurate (see questions below).\\n\\nPlease see our responses to Q1 and Q4.\\n\\n> W2: The coverage of the output examples is not well described (i.e., the set of possible output proof skeletons, the set of deduction rules used for generation, etc).\\n\\n**Proof Skeletons:** We evaluated the number of unique logic skeletons per subset in ProverGen. As reasoning steps increase, the number of unique skeletons also increases, with the hard subset mostly having unique structures.\\n\\n| | easy | medium | hard |\\n|--------------------------|:-----|--------|------|\\n| # Unique proof skeletons | 85 | 221 | 494 |\\n| Total number | 500 | 500 | 500 |\\n\\n**Deduction Rules:** Please see our response to Q5.\\n\\n> W3: Since the described rules are specifically designed to be consistent with the LLM's background knowledge, the model can exploit background knowledge as a heuristic when solving examples from ProverGen.\\n\\nOur approach involves generating rules by replacing placeholders in logic skeletons with predicates, which are created by LLMs with the guidance of fictional background stories. The names and keywords used for creating background stories are generic, minimizing the risk of \\\"shortcuts\\\". 
\\n\\nTo further investigate the potential use of shortcuts, we conducted an experiment by removing universal rules from 60 randomly selected instances in ProverGen. We evaluated GPT-4 and Llama-3.1-70B-Instruct on this \\\"corrupted\\\" dataset. If the models were relying heavily on inherent knowledge as shortcuts, their performance would remain roughly unaffected despite the absence of universal rules.\\n\\nHowever, our results showed a significant drop in performance, indicating that these models do not heavily rely on background knowledge to solve the problems. This supports our claim that the models are not exploiting background knowledge as heuristics.\\n\\n| | Original | Corrupted | $\\\\Delta$ |\\n|-------------------------------|:---------|-----------|--------|\\n| GPT-4o-Direct | 58.33 | 43.33 | -15.99 |\\n| GPT-4o-CoT | 68.33 | 45.00 | -23.33 |\\n| Llama-3.1-70B-Instruct-Direct | 65.00 | 48.33 | -16.67 |\\n| Llama-3.1-70B-Instruct-CoT | 65.00 | 53.33 | -11.67 |\\n\\nWe have added these results to our revised paper in Appendix C.\\n> Q1: While it is true that the provided data CSVs for PrOntoQA doesn't contain logical form annotations for each example, they do provide code to parse any sentence in the dataset into FOL logical forms (since each example was generated by converting logical forms into natural language sentences)\\n\\nThank you for pointing it out. We have corrected this and updated Table 1 accordingly in the revised PDF.\\n\\n> Q2: Line 86: \\\"ProverGen\\u2019s difficulty setting demonstrates a fine-grained complexity\\\" I understand the intended meaning of this phrase thanks to the part of the sentence that follows this phrase, but this phrase itself is somewhat difficult to understand on its own.\\n\\nWe apologize for any confusion. We've revised it to: \\u201cProverGen's difficulty settings are carefully designed to ensure appropriate complexity\\u201d. 
\\n\\n> Q3: It would also be good to mention the disadvantage of data contamination for manually-annotated datasets. In addition, a discussion of how the proposed dataset relates to the problem of data contamination would be welcome.\\n\\nThank you for your insightful suggestion. Data contamination is indeed a significant issue in manually annotated datasets, as it is difficult to update them frequently. This limitation can lead to biased evaluations and hinder true generalization due to potential data leakage.\\n\\nOur ProverGen framework addresses this challenge by enabling the generation of new datasets using diverse models and controlled complexity. This approach ensures that the datasets remain fresh and uncontaminated, mitigating the problem of data contamination and supporting more reliable and unbiased evaluations. We have included this discussion in the introduction of our revision.\"}", "{\"title\": \"Updated Manuscript and Response to all reviewers:\", \"comment\": \"We thank all the reviewers for their valuable feedback and thoughtful comments on our paper. We have uploaded a revised PDF of the paper. All revisions have been clearly marked using red font and formatted as [Revision: xxx] to facilitate easy identification.\"}", "{\"title\": \"Response to Reviewer oQpi (3/3)\", \"comment\": \"> Concern2-Part1: Also, my concern that \\\"the LLM translated facts and rules that make the statements semantically more natural and diverse; it is not sufficiently demonstrated how this aspect contributes to the novelty of this benchmark\\\" was not addressed.\\n\\nYes, solely using LLMs to translate facts and rules can not be treated as novel. However, as stated above, it is the framework that is novel (Please see our response to Follow-up Q1 and Follow-up Q2). \\n\\nLLMs is a natural choice in our framework. 
The reasoning chains in ProverGen are much more diverse and complex than those in existing benchmarks, which makes prior methods that use templates to translate logic problems infeasible. That's why we use LLMs here. Also, using LLMs could enable a more diverse and natural dataset.\n\n> Concern2-Part2: Concretely, while the background story was generated by LLM, it does not align with the chain rule tree generated other than providing a context for rule and fact translation.\n\nThe background story serves a crucial role in our framework by providing context, which enriches the LLM's output and prevents repetitive responses when applying the same rule. Please see our response to W1 for Reviewer 64s6.\n\n> Concern2-Part3: This dataset is effectively still synthetic as the reasoning chains are not drawn from real-world natural language distribution\n\nA dataset can be either manually created or synthesized, and both approaches have their own merits and drawbacks. Compared to manually crafted datasets, ours avoids data contamination problems and can be created at a significantly lower cost. Unlike previous synthetic datasets, our dataset features more diverse and natural language, intricate logic, symbolic representations, faithful reasoning chains, and controllable complexity, enhancing its value.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
C1wSR50nYf
Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis
[ "Qunzhong Wang", "Xiangguo Sun", "Hong Cheng" ]
In recent years, graph prompting has emerged as a promising research direction, enabling the learning of additional tokens or subgraphs appended to original graphs without requiring retraining of pre-trained graph models across various applications. This novel paradigm, shifting from the traditional "pre-training and fine-tuning" to "pre-training and prompting," has shown significant empirical success in simulating graph data operations, with applications ranging from recommendation systems to biological networks and graph transferring. However, despite its potential, the theoretical underpinnings of graph prompting remain underexplored, raising critical questions about its fundamental effectiveness. The lack of rigorous theoretical proof of why and how much it works is more like a "dark cloud" over the graph prompting area for deeper research. To fill this gap, this paper introduces a theoretical framework that rigorously analyzes graph prompting from a data operation perspective. Our contributions are threefold: **First**, we provide a formal guarantee theorem, demonstrating graph prompts’ capacity to approximate graph transformation operators, effectively linking upstream and downstream tasks. **Second**, we derive upper bounds on the error of these data operations for a single graph and extend this discussion to batches of graphs, which are common in graph model training. **Third**, we analyze the distribution of data operation errors, extending our theoretical findings from linear graph models (e.g., GCN) to non-linear graph models (e.g., GAT). Extensive experiments support our theoretical results and confirm the practical implications of these guarantees.
[ "graph prompting", "graph neural networks" ]
https://openreview.net/pdf?id=C1wSR50nYf
https://openreview.net/forum?id=C1wSR50nYf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wJGTmRnsmu", "vGySuhIDPy", "vFEK6awIaB", "ugBpRjPhz9", "szmYh0QpST", "sHG3RSaiCO", "s9XGvqg7iN", "rnOcX5jNwG", "rOJ5S1iZWP", "qJT68q3rru", "nkmXSD1MDE", "mTy5BeRdGs", "mRrVnlNoxr", "jkhXVuVfPq", "fML5EzNwmq", "fDVPqCIrb6", "e8unynSk6D", "avzpv2WTEA", "Zg9fpuh1ef", "YnwEC31NBZ", "YjGmXWawqw", "YgzKSO2FSp", "X6JM5tAqsj", "WVM2OQdTzl", "WP66HE61KX", "W9I7irm1hA", "VVZfnPNDMg", "UtAztmPHbu", "RhDxcZI2XR", "QmzXEghRwF", "Nd7480DMeC", "LoHm8O9jcE", "J4eLoEzrKG", "HAGlVSKUDW", "DrxWEibnxA", "Dpqgq3TAnF", "BPvh6wbiNH", "7ml3zrTVnQ", "6toPGwHsfk", "3YvA8RAxHl", "1bootYcqvT", "0YxAaQprGP" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730382240362, 1732336774858, 1732263208876, 1732611461803, 1732263763475, 1733169087109, 1732525853339, 1732263130409, 1732263043978, 1732264324459, 1730242122353, 1732773576458, 1733027198177, 1733190184129, 1732263940567, 1732857302892, 1732607899509, 1732858645430, 1730709336239, 1732774340558, 1732507912983, 1732270512767, 1732507101937, 1733021050104, 1732264417635, 1732947259001, 1732264037269, 1733035396372, 1732266412896, 1732336311552, 1732264167026, 1732666258222, 1732846315508, 1732521696056, 1732609041007, 
1732958209269, 1733189995169, 1732263405304, 1732263481941, 1732645398654, 1732262952264, 1732263553630 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_sjg4" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_q53a" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_VdLJ" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_q53a" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_sjg4" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_q53a" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_VdLJ" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_q53a" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_sjg4" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9080/Reviewer_q53a" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_sjg4" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Reviewer_q53a" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ], [ "ICLR.cc/2025/Conference/Submission9080/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This study provides a solid theoretical analysis of graph prompts. The theoretical findings include the capabilities of graph prompts on GCN models with non-linear layers, the error bound of the data operations by graph prompts for both a single graph and batch of graphs, and the error distributions of the data operations by graph prompts. This work also provides empirical studies to confirm these theoretical findings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The theoretical analysis in this study addresses the gap in establishing a theoretical basis for the capabilities of graph prompts with non-linear pretrained models and training on batches of graphs. The theoretical findings demonstrate the capabilities of graph prompts across several typical GNN models, providing detailed error bounds and error distributions. Notably, the authors show that the scale of prompts does not increase linearly with the size of the graph dataset, a positive result that supports the scalability of graph prompts.\", \"weaknesses\": \"The writing could be further improved to assist readers who may not have sufficient background knowledge about graph prompts. It also lacks some explanations, which might lead to misunderstandings among readers. For instance:\\n\\n1. The description of the function $C$ is vague and could be clarified using a specific task, such as binary classification. 
Additionally, $C$ should be denoted as a function of a certain downstream task.\n2. The terms $\\Phi$, $\\mu$, and $\\lambda$ in Equation (4) are not explained and should be clarified.\n3. The precondition in Corollary 1 is not specified and should be stated explicitly.", "questions": "1. Why is the error related to the prompt design in Theorem 5? Intuitively, the term $||C(G)||$ appears to be related to the graph itself and the downstream task, suggesting it may not depend on the prompt design.\n2. Does the number of non-linear layers affect the error bound in Theorem 5?\n3. How is $C(G)$ computed in Figure 1?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "3", "code_of_conduct": "Yes"}", "{\"title\": \"Reviewer Feedback to Author Response\", \"comment\": \"I appreciate the detailed feedback from the authors. However, I am afraid that my concerns are still not addressed. I got some new questions from the authors' feedback, but I first want to confirm the correctness of the formulation of All-in-One in Section 2 and Lemma 1.\n\nIn Section 2, we know that the prompted graph $\mathcal{G}\_\omega$ will have $N+k$ nodes, i.e., $N$ original nodes and $k$ token nodes. When we assume only one prompt token, $\mathcal{G}\_\omega$ will have $N+1$ nodes.\n- NQ1: could the authors specify the shape of matrix $\mathbf{A}\_\omega$ in line 118?\n\nIn Lemma 1, we compute $S\_\omega$ and $X\_\omega$ for All-in-One using Equation (23). 
Here, $S\\\\_\\\\omega$ is the diffusion matrix of $\\\\mathcal{G}\\\\_\\\\omega$ and $X\\\\_\\\\omega$ is the node feature matrix of $\\\\mathcal{G}\\\\_\\\\omega$.\\n- NQ2: what does $l \\\\in \\\\mathbb{R}^{N-1}$ represent in Lemma 1?\\n- NQ3: could the authors specify the shape of $S\\\\_\\\\omega$ and $X\\\\_\\\\omega$ in Equation (23)?\\n\\nThanks, \\nReviewer q53a\"}", "{\"comment\": \"> W3: The experimental section of the paper lacks a comprehensive integration of real-world dataset evaluations within the main text, which limits the visibility and perceived relevance of the findings. By omitting a detailed discussion of these real-world results in the primary analysis, the paper misses an opportunity to contextualize its findings and demonstrate the effectiveness of graph prompting in practical applications. Including these insights in the main text would strengthen the paper\\u2019s overall impact\\n> \\n\\n**R-3:** We thank the reviewer for this comment. Kindly note that we also included real-world dataset evaluation in our Appendix (see Appendix B), from which we can find similar observations.\\n\\n---\\n\\n> Q1: In the experimental section, the authors utilize GCN and GAT as the primary models. Given that graph transformers have emerged as SOTA models, especially in the context of prompting, could you elaborate on your decision to focus on GCN and GAT? How do you believe your findings might differ if applied to graph transformers?\\n> \\n\\n**R-Q1:** We thank the reviewer for this comment.\\n\\n- Kindly note that from the theory perspective, and in the setting of this paper, there is no fundamental difference between GAT and Graph Transformers. The difference is that GAT uses an attention mechanism upon the topological structure and graph transformer uses an attention mechanism upon a complete graph but still with a position mask upon this complete graph to preserve the topological structure for information aggregating. 
They differ in engineering details but have no significant or fundamental difference in mathematics.\\n- In our theory analysis, we choose GCN and GAT because they are the most classic, simplest, and the most representative models. Graph Transformers have many unnecessary notations for delivering our theory. Therefore, we avoid being trapped in unnecessary math complexity so that we can **clearly present** the nature of graph prompt in theory to our readers, without too many trivial distractions. We believe it is a very important principle for a theory-intensive paper to keep elegant, concise, and simple when reflecting the theory in nature.\\n\\n---\\n\\n> Q2: Based on the theoretical study in the paper, what are the next steps you envision for further research in graph prompting? How can practical application benefit from the theoretical framework? It would be good to have the paper linked to real-world applications.\\n> \\n\\n**R-Q2:** Thank you for your thoughtful comment and for highlighting the importance of connecting our theoretical work to practical applications in graph prompting. We are proud of this work, as it not only advances the theoretical understanding of graph prompting but also redefines our recognition of previous efforts in the field.\\n\\nWe emphasize that the potential and future of graph prompting extend beyond being merely a tuning trick. Instead, we should focus on its capability to learn and implement graph data manipulation strategies. This shift in perspective opens up new avenues for research and practical applications.\", \"regarding_the_next_steps_we_envision_for_further_research_in_graph_prompting\": \"1. **Data-Operation Intensive Applications**: We believe that future research should explore how graph prompts can be leveraged to achieve applications that require complex data operations. 
For instance, designing advanced graph prompts to integrate and reason over multi-source graph databases could significantly enhance data analysis and interpretation.\\n2. **Cross-Domain Transfer**: Another promising direction is using graph prompts to facilitate cross-domain transfer for graph models. This could enable models trained on one type of graph data to be effectively applied to different domains with minimal adjustments, thereby improving adaptability and efficiency.\\n3. **Enhanced Graph Prompt Design**: We see potential in developing more sophisticated graph prompts that can capture intricate patterns and relationships within graph data. This could lead to better performance in tasks such as network analysis, recommendation systems, and biological modeling.\\n\\nIn our paper, particularly in the Introduction, we have listed several real-world applications where graph prompts are utilized. These include social network analysis, knowledge graph completion, and molecular property prediction. By grounding our theoretical framework in these practical contexts, we aim to bridge the gap between theory and application.\\n\\nWe believe that our theoretical contributions provide a solid foundation for these future explorations and can significantly benefit practical applications by offering insights into the capabilities and limitations of graph prompting techniques.\\n\\nThank you again for your valuable feedback. We are committed to further linking our theoretical findings with real-world applications and will elaborate on these connections in the final version of the paper.\"}", "{\"comment\": \"Thanks for your further questions!\\n\\n\\n> ...then set the loss function as the distance between the embeddings of $pooling(GNN(G))$ and the embeddings of $P_w(G)$ to optimize $P_w$. \\n\\n\\n**R 1**: Given a graph $G$, we change the graph and get $G^{'}$. The loss function is **not the distance between $G$ and $P_w(G)$ but $G^{'}$ and $P_w(G)$.** \\n\\n- Why? 
Because here the graph embedding of $G^{'}$ is treated as $C(G)$. \n- Why is $G^{'}$ treated as $C(G)$? Because, just as we mentioned previously: \n\n> In this paper, we argue the powerful capability of graph prompt in manipulating graph data. However, is graph manipulation always helpful for any kind of downstream task (or dataset)? That is an open problem that still needs to be answered, and it might be the reason why graph prompts sometimes do not \"work well\" in some cases as reported by some empirical studies. Figuring out this problem deserves a hundred new research papers and application studies. To this end, we avoid discussing detailed downstream tasks. Instead, we focus on one \"special\" task, the performance of which almost certainly relates to data manipulation. This task is:\n\n> changing some of the edges/nodes etc. of graph $G$ to get a new graph $G^{\u2019}$, so that we have two graphs, $G$ and $G^{\u2019}$. The task target is to find the graph embedding of this manipulated graph $G^{\u2019}$.\n\n> In this task, $C(G)$ means the graph-level embedding of $G^{\u2019}$ (the ground truth, which we can easily obtain by simply calculating the pooling of the manipulated graph). Then we consider how well the graph prompt approximates this ground truth and report the error between them. \n\n---\n\n> ... Theorem 5 does not establish any relationships between the error and the rank; \n\n**R2**: Theorem 5 said: \"...assume at least one layer\u2019s parameter matrix is not full rank...\". Therefore, Figure 1 is designed to reflect the error under different ranks.\n\n\n---\n\n\n> 2. Figure 1 does not reveal the relationship between the error and $||C(G)||$; 3. 
Even though $P_w$ can approximate the specific embeddings of $pooling(GNN(G))$ under the guidance of the distance loss function in this case, it does not imply that it can approximate the desired $C(G)$ with the guidance of the downstream task loss function.\\n\\n**R3**: I think the reviewer might be trapped in a traditional empirical thinking habit. You might assume that an empirical paper should have one experiment to support one conclusion. However, this is a theory-oriented paper, so some theorems might not be directly \\"demonstrated\\" by empirical experiments, because these theorems have already been rigorously proved in our Appendix. \\n\\n**Then the question is: how to \\"demonstrate\\" a theorem via empirical study?** A mainstream solution is to use several empirical studies to reflect different sides of the observations drawn from the theorem. \\n- As for Theorem 5, we reflect some key observations from it not only in Figure 1; please also check Section 5.3, where we have three more experimental figures that try to reflect the theorem comprehensively. \\n- As for Figure 1: lines 275-281 said:\\n> Theorem 5 reveals the potential distortion of BG\\u2019s shape when the matrix is not full-rank and the model\\u2019s expressive power is insufficient. This can lead to an increased distance between BG and the transformation domain DP(G) of GPF or All-in-One prompts.
To confirm this judgment, we conducted a quantitative analysis using numerical methods for the case of non-full-rank matrices.\\n\\n\\n\\nWe also encourage the reviewer to see our detailed procedure in Appendix C, in which we walk our readers step by step through how Figure 1 is produced.\"}
Please check in lines 868-872.**\\n\\nThis treatment ensures dimensional consistency while maintaining the conciseness and uniformity of the expression without affecting the final computational results.\"}", "{\"comment\": \"> W2: In Section 4, the theorems are based on the assumption of full-rank weight matrices. It would be helpful to investigate how well the assumption holds in practical applications.\\n> \\n\\n**R-2:** Thank you for your insightful comment regarding the assumption of full-rank weight matrices in Section 4. We appreciate the opportunity to clarify how this assumption holds in practical applications.\\n\\n1. **Prevalence of Full-Rank Matrices in Well-Trained Models**: As mentioned in our paper, well-trained models typically contain full-rank weight matrices. Common initialization techniques such as orthogonal initialization and He initialization ensure that the weight matrices start as full-rank. During training, these matrices tend to maintain their full-rank property due to the nature of gradient-based optimization processes.\\n2. **Expressive Power of Full-Rank Matrices**: Full-rank matrices inherently possess stronger expressive capabilities. Training algorithms aim to optimize the model\\u2019s expressive power to capture complex patterns in data. Therefore, it is intuitive and reasonable to assume that the training process favors the retention of full-rank weight matrices to achieve better performance.\\n3. **Mathematical Justification**: From a mathematical standpoint, the set of non-full-rank matrices has measure zero in the space of all matrices. This is because the determinant function is continuous, and a matrix is singular (non-full-rank) only when its determinant is exactly zero. 
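To make this padding fully concrete, here is a minimal NumPy sketch; the sizes `N`, `F` and the attribute values are our illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal sketch of the padding described above: extend the node attribute
# matrix X0 (shape (N-1) x F) to X (shape N x F) by appending one zero row,
# so every matrix in the analysis shares the same leading dimension N.
N, F = 5, 3  # illustrative sizes
X0 = np.arange((N - 1) * F, dtype=float).reshape(N - 1, F)
X = np.vstack([X0, np.zeros((1, F))])

assert X.shape == (N, F)
assert np.allclose(X[:N - 1], X0)   # original attributes are unchanged
assert np.allclose(X[N - 1], 0.0)   # the appended zero vector 0_F
```

As noted above, this trick only normalizes dimensions; it does not change any computed result.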
Consequently, the probability of a randomly initialized matrix being non-full-rank is negligible.\\n\\n**Conclusion**: Under typical conditions and standard training practices, it is reasonable to assume that pre-trained models yield full-rank weight matrices. This assumption is both theoretically sound and practically observed, making it a valid basis for our theorems.\\n\\n1. **Addressing Non-Full-Rank Cases**: We acknowledge that this assumption may not hold in all scenarios. To account for situations where weight matrices might not be full-rank, we have conducted additional analyses in Sections 3 and 5. These sections focus on cases without the full-rank assumption, providing a more comprehensive understanding of the model\\u2019s behavior under different conditions.\\n\\nWe will enhance the final version of the paper by including a more detailed discussion on the validity of the full-rank assumption in practical applications, along with empirical evidence and potential limitations. Thank you again for your valuable feedback. We believe that addressing this point strengthens our work and its applicability to real-world scenarios.\"}", "{\"comment\": \"> W1: The paper offers rigorous theoretical analysis related to data operation perspective, upper bound study, etc. In the meantime, it may lack sufficient contextualization regarding how these error bounds apply in real-world scenarios. A more practical interpretation of the results could help bridge the gap between theory and application.\\n> \\n\\n**R-1:** Thank you for your insightful comment. We agree that bridging the gap between theory and application is essential for advancing practical understanding and utility. In practice, the design and training of prompts often rely on empirical results and heuristics. Our theoretical analysis provides error bounds that can directly inform and guide prompt design and analysis in real-world scenarios. 
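The measure-zero argument above can also be checked numerically. The following is a minimal sketch; the 64x64 size, the number of trials, and the Gaussian initializer are illustrative assumptions on our side:

```python
import numpy as np

# Empirical check of the measure-zero argument: randomly initialized
# (Gaussian) weight matrices are, numerically, always full rank.
rng = np.random.default_rng(0)

def fraction_full_rank(n_trials: int, shape=(64, 64)) -> float:
    """Fraction of random Gaussian matrices of `shape` that are full rank."""
    full = sum(
        np.linalg.matrix_rank(rng.standard_normal(shape)) == min(shape)
        for _ in range(n_trials)
    )
    return full / n_trials

print(fraction_full_rank(100))  # prints 1.0: every sampled matrix is full rank
```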
Specifically:\\n\\n- **Theorems 5 and 8** highlight that rank deficiency in the model\\u2019s parameter matrix can lead to information loss in prompts. This means that if a prompt is not performing well, it may be due to the pre-trained model exhibiting a non-full-rank condition\\u2014an experience that has not been previously mentioned in this field. Recognizing this issue allows practitioners to focus on the rank properties of their models to retain essential information from prompts.\\n- **Theorem 6** indicates that for datasets of a certain scale, prompts with limited complexity have a theoretical upper bound on performance. This suggests that to achieve better results, we need to increase the complexity of the prompts. This theorem provides practical guidance on determining the necessary complexity level of prompts relative to the dataset size.\\n- **Theorem 7** offers a method to estimate the required size of a prompt through the value of *\\u03f5*. By calculating *\\u03f5*, practitioners can infer whether the prompt\\u2019s size is sufficient for the dataset in question. If a prompt underperforms on a given dataset, computing *\\u03f5* helps determine whether the issue stems from theoretical limitations (implying the need for a more complex prompt) or from not yet finding the optimal parameters\\u2014thus indicating a need for further training.\\n\\nKindly note that we only list a small part of useful ideas. We are happy to say that our theory serves as a \\u201cspade\\u201d and we encourage future researchers to dig for more insightful gold using this spade. These theoretical insights provide practical tools for prompt design and analysis, helping practitioners understand and overcome performance limitations in real-world applications. 
By applying our error bounds, one can make informed decisions about model adjustments and prompt complexity to achieve desired outcomes.\\n\\nWe appreciate your feedback and will incorporate a more detailed discussion of these practical implications in the final version of the paper to enhance the connection between our theoretical results and their real-world applications.\"}", "{\"comment\": \"> Q1: The authors take GCNs as linear graph models and GATs as nonlinear graph models based on different aggregation mechanisms. Could the authors provide some previous studies that use such categories? According to my knowledge, nonlinear graph models are typically GNNs with nonlinear activation functions between GNN layers.\\n> \\n\\n**R-Q1:** We thank the reviewer for this interesting question.\\n\\n- Kindly note that \\u201clinear/non-linear model\\u201d is not a special term for GNNs. It is a very natural, very normal, and very common word mathematically. For example, nonlinear activation functions between GNN layers are a non-linear component in a GNN because they transform node features (say *X*) in a non-linear way. Similarly, for the message-passing stage (aggregation mechanisms) there also exist linear and non-linear ways of propagating node messages. For example, GCN has a linear aggregation w.r.t. *X* (e.g., *X* = *AXW*) and GAT has non-linear aggregation. Technically, non-linear graph models contain two components that might introduce this nonlinearity: aggregation and activation.\\n- The broader background of this paper is that nearly all graph models contain nonlinear activation functions, but not every model has a nonlinear aggregation component. Our paper focuses on discussing the impact of linear/non-linear aggregations (GCN and GAT) beyond activation functions; therefore, we think using \\u201clinear/nonlinear graph models\\u201d in the context of this paper is elegant, concise, and clear without ambiguity.
(It is a very important principle for a theory-intensive paper).\\n- To further address the reviewer\\u2019s concern, we replace the term \\u201clinear/nonlinear graph models\\u201d with \\u201clinear/nonlinear aggregation graph models\\u201d in any place that we think might cause misunderstanding.\\n\\n---\\n\\n> Q2: What is meaning of SG_w in Theorem 4\\n> \\n\\n**R-Q2:** *SGw* is the prompt subgraph (including prompt tokens and token structure). The graph prompt changes the original graph *G* to *Gw* by integrating *G* with the prompt subgraph *SGw*. We thank the reviewer for pointing out this confusing point, and we have added the explanation in our revised paper.\\n\\n---\\n\\n> Q3: What does miu mean in Theorem 5\\n> \\n\\n**R-Q3:** Thank you for your insightful question regarding the meaning of *\\u03bc* in Theorem 5.\\n\\n- In the proof of Theorem 5, we perform an inequality-scaling analysis that allows us to bound the error as the product of two components. The first component depends solely on the model parameters and reflects the model\\u2019s characteristics. The second component is related to the norm of the graph embedding vector *C*(*G*), which is intrinsic to the graph itself.\\n- The parameter *\\u03bc* is a function of the model parameters *\\u03b8* and represents the specific upper bound of the component that depends only on the model. Since this upper bound is uniquely determined by the various parameters of the model, *\\u03bc* can be regarded as an implicit function of *\\u03b8*.\\n- In the subsequent parts of our paper (e.g., Equation (4) and Appendix A.3.3), we provide the explicit form of this implicit function *\\u03bc*. Intuitively, deriving *\\u03bc* involves analyzing the expressive power of the model and understanding the corresponding structure of the embedding space.\\n\\nWe hope this explanation clarifies the role of *\\u03bc* in Theorem 5.
Please let us know if you have any further questions or need additional clarification.\\n\\n---\\n\\n> Q4: Could the authors provide more details about All-in-One-Plus?\\n> \\n\\nPlease see our response to W3.\"}", "{\"summary\": \"Graph prompting is a popular method to adapt pre-trained graph models for downstream tasks. While many graph prompting methods work well on real-world graph datasets, theoretical analyses on graph prompting methods are still not well explored. This study provides comprehensive analysis of popular graph prompting methods. The authors conduct extensive experiments on both synthetic and real-world datasets to validate the theoretical findings in this study.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The studied problem of theoretical analyses on graph prompts is a significant topic.\\n2. The authors provide experiments on both synthetic and real-world datasets.\", \"weaknesses\": \"1. Some arguments lack supporting evidence. The authors claim that \\u201cThe rest graph prompt designs can usually be treated as their special cases or natural extensions.\\u201d in line 105. However, drawing this conclusion is not straightforward. For example, GPPT [1] designs graph prompts as structure tokens and task tokens, and GraphPrompt [2] designs graph prompts as prompt-based Readout operations. The authors may discuss how to treat these graph prompt methods as special cases or natural extensions of GPF or All-in-one.\\n2. Incorrect formulation of All-in-One. According to Section 2, All-in-One obtains the prompt graph by connecting $k$ learnable prompt tokens with $N$ original graph nodes. However, All-in-one in Lemma 1 adds prompt tokens to node features, which conflicts with Section 2. Since most theoretical analysis in this study is related to All-in-one, the authors should at least provide correct formulation of All-in-One as the basis of this paper.\\n3. 
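To make the linear/nonlinear *aggregation* distinction in R-Q1 concrete, here is a minimal sketch. The attention scoring below is a simplified stand-in for GAT (a dot-product score rather than the exact LeakyReLU attention), and all matrices are illustrative:

```python
import numpy as np

def gcn_layer(A, X, W):
    # Linear aggregation w.r.t. X: the output is A X W.
    return A @ X @ W

def gat_like_layer(A, X, W):
    # Simplified attention-style aggregation: the aggregation weights
    # themselves depend (nonlinearly) on X via a softmax over scores.
    H = X @ W
    scores = np.where(A > 0, H @ H.T, -np.inf)        # mask non-edges
    att = np.exp(scores - scores.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)
    return att @ H

rng = np.random.default_rng(0)
A = np.array([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])  # with self-loops
X = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 2))

# Doubling X doubles the GCN output exactly (linearity in X) ...
assert np.allclose(gcn_layer(A, 2 * X, W), 2 * gcn_layer(A, X, W))
# ... but not the attention-based output (nonlinearity in X).
assert not np.allclose(gat_like_layer(A, 2 * X, W), 2 * gat_like_layer(A, X, W))
```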
The description of All-in-One-Plus in Section 4.1 is missing. The authors should provide the citation of this method and introduce it in the paper.\\n4. Notations are inconsistent in the paper, which confuses readers a lot. For example, the authors use two different notations $l$ and $i$ to represent the layer index of GNNs in the paper, while $l$ is also used to denote a column vector. The authors should use consistent notations to avoid ambiguity.\\n5. Duplicated sentences should be avoided. For example, the paragraph *Model Settings* appears twice in Section 5.1 and Appendix C.\\n\\n[1] Sun, Mingchen, et al. \\\"Gppt: Graph pre-training and prompt tuning to generalize graph neural networks.\\\" Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022. \\n[2] Liu, Zemin, et al. \\\"Graphprompt: Unifying pre-training and downstream tasks for graph neural networks.\\\" Proceedings of the ACM Web Conference. 2023.\", \"questions\": \"1. The authors take GCNs as linear graph models and GATs as nonlinear graph models based on different aggregation mechanisms. Could the authors provide some previous studies that use such categories? According to my knowledge, nonlinear graph models are typically GNNs with nonlinear activation functions between GNN layers.\\n2. What is meaning of $SG_\\\\omega$ in Theorem 4?\\n3. What does $\\\\mu$ mean in Theorem 5?\\n4. Could the authors provide more details about All-in-One-Plus?\\n5. Why do the authors not include the results of GPF-plus and All-in-one-plus in Section 5?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Reviewer q53a\\n\\nWe truly thank you for your kind engagement in our discussion. We kindly inquire if your previous questions have been clarified and please do not hesitate to contact us if you have any further questions or suggestions. 
\\n\\nWe value your opinion; please let us know if our previous effort has changed your mind. \\n\\nWe are warmly looking forward to hearing good news from you, and we are glad to discuss further with you! Please share any additional feedback at your earliest convenience.\\n\\n\\n\\nKind regards.\"}
The authors may discuss how to treat these graph prompt methods as special cases or natural extensions of GPF or All-in-one.\\n> \\n\\n**R-1.1. GPPT:** In GPPT, the structure tokens can be treated as some prompt tokens in All-in-One, and the task tokens in GPPT represent node category, which are also some special tokens of All-in-One\\u2019s prompt graph. Their task can be treated as predicting part of token structure between structure tokens and task tokens to achieve node classification.\\n\\nThe above observation is also confirmed by the authors of All-in-One in their paper [1]. We kindly encourage the reviewer to check in section 3.5.1 of their paper. Here we briefly copied their content for your information:\\n\\n> Their work somehow is a special case of our method when our prompt graph only contains isolated tokens, each of which corresponds to a node category.\\n> \\n\\n**R-1.2. GraphPrompt:** In most cases, prompt tokens are expected to be inserted in the node features, which can be treated as some operation at the \\u201c0-th\\u201d layer of the graph models. Our theory discussed how \\u201c[1,end]\\u201d layers of the GNN act with graph prompts. Since graph pre-trained models are frozen in the graph prompting area, we can also extend the discussion by changing the position of the graph prompt to *k*-th layer of the GNN where [1,\\u2006*k*) layers are treated as fixed feature mapping function and (*k*\\u2005\\u2212\\u2005*end*] layers still hold the findings of our theory framework. From this perspective, GraphPrompt, obviously, can be treated as some special case of this paper and our analysis framework is applicable to this model.\\n\\n**R1.3. Other Designs:** For more kinds of graph prompt models, Sun et al.\\u00a0[2] have carefully discussed their inner connections to All-in-One. Please see Section 5.4 of this survey [2] for your information.\\n\\nWe would very much like to add these discussions to our paper. 
However, these points have been carefully discussed in a very comprehensive survey and many detailed papers, so we think it might be a little bloated and redundant for our goal of concisely presenting an elegant theory. **To further address the reviewer\\u2019s concerns**, we cite these papers at the sentence \\u201cThe rest graph prompt designs can usually be treated as their special cases or natural extensions\\u201d in our paper.\", \"ref\": \"1. Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, Jihong Guan. All in One: Multi-task Prompting for Graph Neural Networks. KDD 2023\\n2. Xiangguo Sun, Jiawen Zhang, Xixi Wu, Hong Cheng, Yun Xiong, Jia Li. Graph Prompt Learning: A Comprehensive Survey and Beyond. https://arxiv.org/abs/2311.16534\"}
**Your opinion is very important to us and we humbly hope that you would view our work favorably** and consider raising your scores to reflect the improvements made. We carefully checked all your questions and found that nearly all of your questions/weaknesses are largely neutral rather than strongly negative. We believe our work deserves a more favorable score once your misunderstandings are clarified. \\n\\nGraph prompts have recently been widely treated as a very promising data-level path toward more general graph-based AI applications. However, despite this potential, ``the theoretical foundation of graph prompting is nearly empty, raising critical questions about its fundamental effectiveness.`` The lack of rigorous theoretical proof of why and how much it works is like a **\\u201cdark cloud\\u201d** hanging over the graph prompting area. **And that is why we are truly proud of this work as it advances the theoretical understanding of graph prompting and redefines previous efforts in the field.** \\n\\n\\nCurrently, the research community is filled with empirical studies, but we urgently need to figure out these foundational theories to support further progress. Our contributions provide a solid foundation for future explorations and can significantly benefit practical applications by offering insights into the capabilities and limitations of graph prompting techniques.\\n\\n\\nThanks again for your time. We are warmly looking forward to your letter.\\n\\nKind regards\"}", "{\"comment\": \"Dear Reviewer q53a\\n\\nI think you might have some misunderstanding of their paper (largely caused by their presentation, not ours).
If you look at their paper, they said:\\n\\n> We can define the inserting pattern as the **dot product between prompt tokens and input graph nodes, and then use a tailored connection** like\\n$ \\mathbf{\\hat{x}}_i=\\mathbf{x}_i+\\sum_{k=1}^{|\\mathcal{P}|} w_{ik}\\mathbf{p}_{k} $ where $ w_{ik} $ is a weighted value to prune unnecessary connections\\n\\n\\n\\nThe above content is genuinely confusing for readers, so we understand your misunderstanding; we initially had a similar confusion. However, after carefully checking their paper and their code, and after a **face-to-face online meeting** with the original authors of that paper, we can ensure that:\\n\\nThis equation does not describe how to insert a prompt token into the graph. Rather, when (or after) we connect a prompt token to a node (via cross-links), we send this combined graph to a GNN, and the equation is an example of how the GNN aggregates a node's features from its neighbors in the prompted graph, i.e., how the feature changes once the combined graph is fed into the GNN. All-in-One does not directly change node features; it inserts the prompt into the original graph, and the feature change happens within the GNN.\\n\\n``One More Thing``:\\nWe have realized that **nearly all your misunderstandings, and nearly all your proposed weaknesses of our paper, are based on your misunderstanding** of ``THAT paper``, not ours! Instead, our presentation more clearly conveys the nature of All-in-One (we did a better job than that paper's expression). We feel sorry that THAT paper did not clearly present its idea, but we wish kindly to say: **this is not our fault.**\\n\\nWe have carefully checked this issue again and again, with the original authors, with their papers, with their codes, and with everyone we could turn to for help. \\n\\n\\nThis year, ICLR is more competitive than ever.
We do hope our huge effort can address your question, and we look forward to your raising the score. Feel free to let us know if you have any further questions.\\n\\n\\nKind regards,\"}
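Our reading of the inserting pattern can also be verified mechanically: attach the prompt token as an extra node with weighted cross-links, run one (here: simplified, linear, sum-style) message-passing step over the combined graph, and the quoted per-node form emerges. The sizes, weights, and the identity-plus-cross-link adjacency below are illustrative assumptions, not All-in-One's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, F = 4, 3
X = rng.standard_normal((N, F))       # original node features
p = rng.standard_normal(F)            # a single prompt token
w = np.array([0.5, 0.0, 1.0, 0.2])    # cross-link weights: prompt -> nodes

# Prompted graph: N + 1 nodes; the last row/column is the prompt token.
X_w = np.vstack([X, p])
A_w = np.zeros((N + 1, N + 1))
A_w[:N, :N] = np.eye(N)               # keep each node's own feature (self-loops)
A_w[:N, N] = w                        # weighted cross-links to the prompt node

# One linear aggregation step over the combined graph ...
H = A_w @ X_w

# ... reproduces x_i + w_i * p for every original node: the feature change
# happens inside the GNN, not by editing X directly.
assert np.allclose(H[:N], X + np.outer(w, p))
```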
\\n\\nThe paper is well-structured, featuring clearly defined notations and formulas that significantly contribute to its readability and comprehension\", \"weaknesses\": \"The paper offers rigorous theoretical analysis related to data operation perspective, upper bound study, etc. In the meantime, it may lack sufficient contextualization regarding how these error bounds apply in real-world scenarios. A more practical interpretation of the results could help bridge the gap between theory and application.\\n\\nIn Section 4, the theorems are based on the assumption of full-rank weight matrices. It would be helpful to investigate how well the assumption holds in practical applications\\n\\nThe experimental section of the paper lacks a comprehensive integration of real-world dataset evaluations within the main text, which limits the visibility and perceived relevance of the findings. By omitting a detailed discussion of these real-world results in the primary analysis, the paper misses an opportunity to contextualize its findings and demonstrate the effectiveness of graph prompting in practical applications. Including these insights in the main text would strengthen the paper\\u2019s overall impact\", \"questions\": \"In the experimental section, the authors utilize GCN and GAT as the primary models. Given that graph transformers have emerged as SOTA models especially in the context of prompting, could you elaborate on your decision to focus on GCN and GAT? How do you believe your findings might differ if applied to graph transformers?\\n\\nBased on the theoretical study in the paper, what are the next steps you envision for further research in graph prompting? How can practical application benefit from the theoretical framework? 
It would be good to have the paper linked to real-world applications\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer VdLJ\\n\\nWe truly appreciate your time in reviewing this work! We write to ask whether your concerns have been addressed by our previous response. Please do not hesitate to let us know since phase 1 will soon come to a close.\", \"we_noticed_that_you_gave_us_a_very_high_score_at_soundness\": \"3: good | Presentation: 3: good | Contribution: 3: good, but a relatively slightly positive score at Rating: 6: marginally above. May I know if is there anything we can do to further address your concern?\\n\\n\\n\\nWe humbly wish you could reconsider our work if the mentioned misunderstandings are clarified. According to other reviewer scores (8 and 3, respectively), your potential further raising might significantly help us to fight for our acceptance. We believe our work deserves a more favorable score if your misunderstandings are clarified. \\n\\nGraph prompts have recently been widely treated as a very promising way at the data level towards more general graph-based AI applications. However, despite its potential, ``the theoretical foundation of graph prompting is nearly empty, raising critical questions about its fundamental effectiveness.`` The lack of rigorous theoretical proof of why and how much it works is more like a **\\u201cdark cloud\\u201d** over the graph prompting area to go further. **And that is why we are truly proud of this work as it advances the theoretical understanding of graph prompting and redefines previous efforts in the field.** \\n\\n\\nCurrently, the research community is filled with too many empirical studies but we urgently need to figure out these foundational theories to support us to go further. 
Our contributions provide a solid foundation for future explorations and can significantly benefit practical applications by offering insights into the capabilities and limitations of graph prompting techniques.\\n\\n\\nThanks again for your time. We are warmly looking forward to your letter.\\n\\nKind regards\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your careful examination of our matrix formulations. Let me clarify the dimensional relationships and notation used in Section 2 and Lemma 1.\\n\\n\\n1. **About the $A_w$ Matrix (NQ1)**\\n - Indeed, when adding one prompt token to a graph with N original nodes, the prompted graph's adjacency matrix $A_w$ should be (N+1)\\u00d7(N+1)\\n \\n2. **About Lemma 1 Notation (NQ2 & NQ3)**: To maintain consistency across different prompting methods, we use a slight notation shift:\\n - We denote the original graph size as (N-1) nodes when discussing All-in-One\\n - After adding the prompt node, we get N\\u00d7N matrices\\n - Vector $l$ represents the aggregation weights from the prompt node to the original (N-1) nodes, \\n - Therefore, $S_w$ and $X_w$ in Equation (23) are N\\u00d7N matrices\\n\\nIn our revised version (line 868), we carefully mark this slight shift to our readers and please kindly note that this shift is a normal math trick, which will not impact the math conclusion but can reduce math complexity.\\n\\n3. **Notation Consistency**: While it might seem more intuitive to use (N+1)\\u00d7(N+1) for All-in-One and N\\u00d7N for GPF. We chose to use N\\u00d7N consistently for All-in-One by shifting the base notation (using N-1 as the original node count). This shift is merely notational and doesn't affect the mathematical validity\\n\\n\\nThanks again for your interest in our work! 
and please feel free to let us know if you have any further suggestions/questions.\"}", "{\"title\": \"To All Reviewers\", \"comment\": \"**Dear Reviewers,**\\n\\nWe would like to express our deepest gratitude for your time and insightful feedback on our manuscript titled \\\"Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis.\\\" Your constructive comments have been invaluable in enhancing the quality and clarity of our work. We are committed to advancing the field of graph prompting and believe that our revisions have significantly strengthened the paper.\\n\\nWe have carefully considered all the comments and have made substantial revisions to address the concerns raised. We highlight part of the changes in red in our revised manuscript. Also, we provide more experiments (https://anonymous.4open.science/r/dgpwadopwta/supplement.pdf) and more open discussion (https://anonymous.4open.science/r/dgpwadopwta/OpenDiscussion.pdf) as expected by reviewer ``VdLJ``, both of which can be accessed from our open code project.\\n\\nWe appreciated Reviewer ``VdLJ`` and Reviewer ``sjg4`` for their positive support to our work (with rating scores **6 and 8**, respectively). We also truly appreciated the comment given by Reviewer ``q53a``, in which we believe most of the concerns arose from misunderstandings, and we have taken steps to clarify.\\n\\n\\n**Significance and Impact of Our Work** \\n\\nWe are truly proud of this work as it advances the theoretical understanding of graph prompting and redefines previous efforts in the field. 
Our contributions provide a solid foundation for future explorations and can significantly benefit practical applications by offering insights into the capabilities and limitations of graph prompting techniques.\\n- **Theoretical Foundations:** By establishing rigorous guarantee theorems and deriving upper bounds on data operation errors, we provide essential theoretical foundations that were previously lacking in the field.\\n- **Practical Implications:** Our work offers practical tools and guidance for prompt design and analysis, helping practitioners overcome performance limitations and make informed decisions about model adjustments.\\n- **Future Research Directions:** We open new avenues for research, encouraging the development of advanced graph prompts and their application in data-operation-intensive tasks and cross-domain transfer.\\n\\n\\n**Conclusion and Request for Reconsideration**\\n\\nWe sincerely hope that our detailed revisions and clarifications have addressed all concerns and demonstrated the strength and relevance of our work. We are committed to contributing to the field and believe that our paper offers significant value to both the research community and practical applications.\\n\\nWe kindly request the reviewers to consider our revisions and the efforts made to enhance the manuscript. Your support is crucial, and we hope that you will view our work favorably and consider raising your scores to reflect the improvements made.\\n\\nThank you once again for your time and valuable feedback.\\n\\nBest regards\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks for giving us time to clarify your question further!\\n\\n\\n\\n**Regarding W1 and Q3:** Yes, your understanding is almost correct. The only minor difference is that we treat $C(G)$ as embeddings instead of direct results. That means the downstream task should have its own task head to extract a correct answer from $C(G)$. 
**Take the graph classification as an example**:\\n\\n$G \\\\rightarrow GNN \\\\rightarrow pooling \\\\rightarrow g^{'} \\\\rightarrow task head \\\\rightarrow results$\\n (apparently your original understanding is a special case of this one)\\n\\nwhere $g^{'}$ is a graph-level embedding, and our paper claims that there exists an optimal embedding, denoted by $C(G)$, that makes the downstream task perform well. Apparently, without knowing the exact task head, we can never know such an optimal $C(G)$. In practice, we can get a sub-optimal $C(G)$ by iteratively tuning the task head and the front models. And, more frequently in the graph prompting area, both the GNN and the task head are given and fixed; in this case, we can find $C(G)$ by minimizing the target loss at the optimal point.\\n\\n\\n\\n**How about node classification?** The whole discussion of this paper is built upon graph-level tasks since we said in lines 96-99 that:\\n\\n> For node-level and edge-level tasks, many studies (Sun et al., 2023a; Liu et al., 2023) have proved that we can always\\nfind solutions to translate these tasks to the graph-level task\\n\\nBy concentrating on graph-level tasks, we provide a clearer evaluation of the effectiveness of graph prompting in approximating graph transformations.\\n\\n**Is graph manipulation always helpful for any downstream task / dataset?**\\n\\nIn this paper, we argue for the powerful capability of graph prompts in manipulating graph data. However, is graph manipulation always helpful for any kind of downstream task (or dataset)? That is an open problem that still needs to be answered, and it might be the reason why graph prompts sometimes do not \\\"work well\\\" in some cases, as reported by some empirical studies. Fully answering this problem could occupy a hundred new research papers and application studies. To this end, we avoid discussing detailed downstream tasks. Instead, we focus on one \\\"special\\\" task, whose performance is almost definitely related to data manipulation.
This task is:\\n\\n> changing some of the edges/nodes etc. of graph $G$ to get a new graph $G'$; then we have two graphs, $G$ and $G'$. The task target is to find the graph embedding of this manipulated graph $G'$.\\n\\nIn this task, $C(G)$ means the graph-level embedding of $G'$ (the ground truth, which we can easily obtain by simply computing the pooling of the manipulated graph). Then we consider how well the graph prompt approximates this ground truth and report the error between them. \\n\\nHowever, the above error is just a single error value, not the upper bound, and not the error distribution. Figure 1 corresponds to Theorem 5, which studies the error upper bound. To this end, we randomly repeat the above setting several times and choose the max error as an approximation of the upper bound from Theorem 5, and that is the reason why we name the vertical axis of Figure 1 \\\"empirical max error\\\".\\n\\n\\n**Regarding W1, W2 and Q3:**\\n\\nWe truly thank the reviewer for pointing out these questions. To make the paper more readable, we polished the above explanation in our revised paper (see Appendix C on page 33).\"}", "{\"comment\": \"Dear Reviewer q53a\\n\\n\\nThe discussion period is ending soon. Please let us know if we have resolved your questions. \\n\\n\\nKind regards,\"}", "{\"comment\": \"> Q5: Why do the authors not include the results of GPF-plus and All-in-one-plus in Section 5?\\n> \\n\\n**R-Q5:** We thank the reviewer for this question:\\n\\n- **Why All-in-One/GPF-Plus in Figure 1?** We do this **JUST** to confirm the following claim in our paper (see Lines 291-292): \\u201c*\\u2026more advanced graph prompts\\u2026 generally have a lower bound than the naive one\\u2026*\\u201d. This is a natural observation drawn from our theory analysis in section 4.1.
We could have come to this observation by simply comparing All-in-One and GPF; however, since they belong to two prompt categories (prompt as tokens, and prompt as a graph, with different inserting patterns), we think it is more convincing to compare within groups (GPF vs. GPF-Plus, and All-in-One vs. All-in-One-Plus).\\n- **Why no All-in-One/GPF-Plus in others?** From a theoretical perspective, there is no fundamental difference between these variants and their regular models. **Kindly note that our experiments strictly correspond to each Theorem we deduced,** all of which are built upon All-in-One and GPF. In particular, section 5.2 corresponds to Theorems 3, 4, and 9. Section 5.3 corresponds to Theorem 5. Section 5.4 corresponds to Theorem 6 and Theorem 7. Figure 2 corresponds to Theorem 8. We do this because we wish to **clearly present** the consistency between our theory and experiments to our readers, without too many trivial distractions. We believe it is an important principle for a theory-intensive paper that the experiments directly reflect the theory.\\n- **We Act:** To further address the reviewer\\u2019s concern, we supplemented more experiments regarding All-in-One-Plus/GPF-Plus in our open code project. Please check this URL https://anonymous.4open.science/r/dgpwadopwta/supplement.pdf, from which we can see that there are no observations that conflict with our theoretical analysis.\"}", "{\"comment\": \"Thanks for your reply.\\n\\nI would like to briefly clarify why I care so much about the formulation of All-in-One in your paper. Your paper aims to provide a theoretical analysis of graph prompting studies, specifically of All-in-One and GPF. We can say at least 50% of your paper is related to or based on All-in-One. So the basic requirement of this study is to correctly formulate All-in-One and make it clear to readers.
For example, when the constructions of $X$ and $e_N$ in Equation (23) are not specified, as in your initial submission, readers may have trouble understanding Equation (23) and following the subsequent analysis.\\n\\nWhile I still think the inserting pattern introduced in the All-in-One paper is modifying node features with prompt nodes, I am now clear about your formulation in Equation (23) based on the revised PDF. Since the discussion period is ending soon, I think we may set it aside and move forward.\\n\\nIn the revised PDF, the authors mentioned that \\\"All-in-One-Plus treats the inserting pattern as learnable weights\\\". Could you provide a more detailed explanation of this? The authors provide formal formulations of GPF, GPF-plus, and All-in-One in the paper, but not yet for All-in-One-Plus.\"}", "{\"comment\": \"> W2: Incorrect formulation of All-in-One. According to Section 2, All-in-One obtains the prompt graph by connecting learnable prompt tokens with original graph nodes. However, All-in-one in Lemma 1 adds prompt tokens to node features, which conflicts with Section 2. Since most theoretical analysis in this study is related to All-in-one, the authors should at least provide the correct formulation of All-in-One as the basis of this paper.\\n> \\n\\n**R-2 (They Are Consistent):** We thank the reviewer for this comment.\\n\\n- **Section 2** describes All-in-One in its general format.\\n- **Lemma 1** presents the transformation of the graph after the prompt is added.
Please kindly note what we said (lines 853-854 in Lemma 1): \\u201c*Without loss of generality, we assume that the prompt subgraph in All-in-One has only one node.*\\u201d\\n- **Why?** Because we said (line 862): \\u201c*For All-in-One, the prompt subgraph is connected to the graph in a parameterized way, i.e.\\u00a0there exist parameters that control the connection relationship between **any two nodes** in the prompt subgraph and the original graph.*\\u201d\\n- **Conclusion:** Lemma 1 presents how features are transformed for any (token, node) pair. Equations (23-24) present what happens inside the graph model when a graph prompt (All-in-One) is added. We do this to reduce the understanding bar for our readers.\\n\\nTo further address the reviewer\\u2019s concern, we have double-checked the manuscript and revised it wherever we could for more readable content.\"}", "{\"comment\": \"Dear reviewer,\\n\\nKindly note that Theorem 5 gives an **UPPER bound** on the error, which means the prompted error lies within this upper bound. We give the details in our proof. \\n\\n\\n\\n> Furthermore, I have additional concerns regarding Theorem 7. Specifically, in Theorem 7, you claim that \\u201cthe eigenvalues of in datasets often exhibit an exponential decay.\\u201d How can you arrive at this conclusion without knowledge...\\n\\nWe are conducting a theoretical analysis, not an empirical one. Let us take an example: you may be familiar with mathematical notions like an \\\"implicit function\\\", where we usually do not know its detailed form, yet this does not prevent us from analyzing its core properties. Here we do not know the details of C, but that does not prevent our analysis. If you read our paper carefully, we do not think this is far-fetched to understand.\\n\\n\\nIf you have changed your mind, we thank you. If not, we thank you too, because we understand it is difficult for a non-expert in this area to read such a theory-intensive paper.
\\n\\n\\nKind regards.\"}", "{\"comment\": \"We thank the reviewer for your comment. According to other reviewers\\u2019 scores (8 and 6), ``your opinion is very important to us!`` Please let us know if you have any further questions or suggestions. We are warmly looking forward to your response and we are glad to discuss further with you!\\n\\nKind regards.\"}", "{\"comment\": \"I appreciate your efforts in providing further clarification. However, some of my concerns remain unaddressed, and I will elaborate on them in detail below.\\n\\n**Regarding W1.** If I understand correctly, $C(G)$ could be expressed as two-dimensional embeddings for nodes in binary classification problems. Specifically, if node $i$ belongs to class 0, we have $C(G)[i]=(1,0)$; otherwise, $C(G)[i]=(0,1)$. It will improve readability if you clearly explain $C(G)$ in a similar way.\\n\\n**Regarding W2.** $\\\\lambda$ and $\\\\mu$ are commonly used to denote the eigenvalues of a graph Laplacian and an adjacency matrix in graph learning literature. However, $\\\\lambda$ in this paper refers to an intermediate value used in your proof. I recommend using other notations to represent the errors of two parts, and more details should be provided to clarify these errors in the main text.\\n\\n**Regarding Q3.** I still do not fully understand how the error in Figure 1 is computed. The error represents the gap between the embeddings of the modified graph after graph prompting and the ground-truth $C(G)$. Is $C(G)$ the answer to the downstream task as I speculated in *Regarding W1*? Additionally, what specific downstream task and dataset are used in Figure 1?\\n\\nOverall, the presentation of your work needs improvement to help readers fully understand the concepts and results. 
Kindly note that you can upload a revised version of your paper during the ICLR rebuttal period to provide further clarification for me and other reviewers.\"}", "{\"comment\": \"> W3: The description of All-in-One-Plus in Section 4.1 is missing. The authors should provide the citation of this method and introduce it in the paper.\\n> \\n\\n**R-3. All-in-One-Plus vs. All-in-One**: In the All-in-One paper, the authors designed the inserting pattern between the prompt and the original graph as cross-links between prompt tokens and the original nodes. These cross-links are given by the dot product between prompt tokens and input graph nodes, followed by a tailored connection. All-in-One-Plus removes the tailoring operation, which means all prompt tokens are connected to all original nodes with learnable connection weights. This is a very natural extension of All-in-One and is also included in their paper. All-in-One-Plus and All-in-One come from the same paper. To further address the reviewer\\u2019s concern, we added this introduction to our paper.\\n\\n---\\n\\n> W4: Notations are inconsistent in the paper, which confuses readers a lot. For example, the authors use two different notations to represent the layer index of GNNs in the paper, while $l$ is also used to denote a column vector. The authors should use consistent notations to avoid ambiguity.\\n> \\n\\n**R-4**: We really thank the reviewer for this valuable suggestion!\\n\\n- We provide a glossary and the default meanings of symbols for the reader\\u2019s convenience. Please see Appendix A.1.2 (Table 2 and Table 3).\\n- To avoid ambiguity, each important formula is followed by a careful introduction of its notations.
We try our best to ensure that our readers will not misunderstand the notation **once they refer to the closest context.**\\n- **To further address the reviewer\\u2019s concern**, we followed the reviewer's suggestion, carefully checked our manuscript, and updated the notations.\\n\\n---\\n\\n> W5: Duplicated sentences should be avoided. For example, the paragraph Model Settings appears twice in Section 5.1 and Appendix C.\\n> \\n\\n**R-5**: We really thank the reviewer for this valuable suggestion! Following it, we rewrote this paragraph in our Appendix and removed the duplicated sentences.\"}", "{\"comment\": [\"Very good question! We thank you for your further reply, which finally clarifies what exactly confused you! Now, let's clarify this question, and hopefully, this time, we can finally clear up your confusion!\", \"You can treat $X_w$ as a collection of features from the original nodes and the prompt tokens. However, $X_w$ itself cannot define a detailed inserting pattern. The inserting pattern needs to be defined within $S_w$ (see Equation 23), from which we can see that $l$ defines how to connect the prompt token to the original nodes (the inserting pattern), and $S_{NN}$ defines how to organize the inner structure within a graph prompt (here we have one token).\", \"We think the reviewer might have some misunderstanding of the paper All-in-One because the original content of this paper said:\", \"> We can define the inserting pattern as the dot product between prompt tokens and input graph nodes, and then use a tailored connection...\", \"Here $l$ can be treated as such a dot product; of course, it can also be treated as free parameters.
(The dot product is just one example given by the authors of that paper, not the only way.)\", \"We also dove into the open code of the All-in-One paper and verified that its public implementation does not conflict with our formula.\", \"To further address the reviewer's concern, we sent an email to the authors of All-in-One and asked them to help us double-check this issue, and we got clear confirmation from the authors of that paper.\"]}", "{\"comment\": \"Dear **Reviewer q53a**\\n\\nToday is Thanksgiving, a day to express our heartfelt thanks. We are grateful to every reviewer for giving their time in this phase and for walking alongside our paper. In the past few days, may everything we encounter become part of life\\u2019s beautiful scenery.\\n\\nWe hope our efforts in the past few days have clarified all your questions. Please do not hesitate to let us know if our previous efforts have changed your mind positively.\\n\\nWe are glad to discuss further with you! Please do not hesitate to give us further feedback at your earliest convenience.\\n\\nKind regards.\"}", "{\"comment\": \"Thanks for your reply. When we denote $N-1$ nodes for All-in-One, the node feature matrix $X$ should have $N-1$ rows, right? So the question is: how can we obtain an $N$-row matrix $X_\\\\omega$ by $X_\\\\omega=X+\\\\textbf{e}_N \\\\textbf{p}^\\\\top$ in Equation (23)?\"}", "{\"comment\": \"Thank you for your further clarification. Regarding Figure 1, if I understand correctly, you change the graph structure and features of $G$ randomly and then set the loss function as the distance between the embeddings of $pooling(GNN(G))$ and the embeddings of $P_w(G)$ to optimize $P_w$. In this case, I believe Figure 1 cannot serve as evidence for Theorem 5 because: 1. Theorem 5 does not establish any relationships between the error and the rank; 2. Figure 1 does not reveal the relationship between the error and $||C(G)||$; 3.
Even though $P_w$ can approximate the specific embeddings of $pooling(GNN(G))$ under the guidance of the distance loss function in this case, it does not imply that it can approximate the desired $C(G)$ with the guidance of the downstream task loss function.\"}", "{\"comment\": \"Dear Reviewer q53a:\\n\\nThanks for your further discussion. \\n\\n> I would like to briefly clarify why I care about the formulation of All-in-One in your paper so much...While I still think the inserting pattern introduced in the All-in-One paper is modifying node features with prompt nodes, I am now clear with your formulation..., I think we may set it aside and move forward...\\n\\n\\n**R1**: We totally understand, and that is why we have explained this to you again and again with patience. However, please kindly note that **your understanding of All-in-One is Wrong**. ``It is unfair to give us a negative impression based on a wrong understanding of another paper``, which is largely caused by their presentation, not ours (although we totally understand your misunderstanding and fully respect you). \\n\\nWe did a better job than the All-in-One paper with a more concise and clear presentation of the nature of All-in-One. And that is also an unexpected contribution: ``for many other readers of All-in-One like you, our paper may help clear up their confusion``. This is very meaningful because **All-in-One is one of the most classic, most fundamental, and most straightforward models in this emerging area.** Anyone who knows something about graph prompts should know it well, which can even be treated as a bar for researchers in this area. \\n\\nWe are proud that our work may help the community understand it better, clearing up some potential misunderstandings about All-in-One. We thank you for setting this issue aside and moving forward. Trust me, you won't regret it.
\\n\\n---\\n\\n > In the revised PDF, the authors mentioned that \\\"All-in-One-Plus treats the inserting pattern as learnable weights\\\". Could you provide a more detailed explanation of this? The authors provide a formal formulation of GPF, GPF-plus, and All-in-One in the paper but not for All-in-One-Plus yet.\\n\\n\\n**R2**: You asked similar questions before; please check our responses to your W3 and Q5. In addition, we have copied our response to your previous questions here to help you understand more details on this point (\\\"All-in-One-Plus treats the inserting pattern as learnable weights\\\"):\\n\\nThe inserting pattern needs to be defined within $S_w$ (see Equation 23), from which we can see that $l$ defines how to connect the prompt token to the original nodes (the inserting pattern), and $S_{NN}$ defines how to organize the inner structure within a graph prompt (here we have one token). Here $l$ can be treated as such a dot product (defined in All-in-One as their inserting pattern); of course, it can also be treated as free parameters (All-in-One-Plus, as mentioned in our paper). \\n\\n\\n\\nFeel free to let us know if you have any further questions. \\n\\nKind regards.\"}", "{\"title\": \"Letter of Thanks and Withdraw Declaration\", \"comment\": \"Dear Reviewer **VdLJ**, Reviewer **sjg4**, Reviewer **q53a**, and Area Chair,\\n\\n\\nToday is a hard day for us, but it is also a day to express our heartfelt thanks. We are grateful to every reviewer for giving their time in this phase and for walking alongside our paper. In the past few days, we were happy to see that all the reviewers clearly recognized the huge contributions and significance of our work in pushing graph AGI, especially graph prompting, forward.
\\n\\n\\nWe also truly appreciate your constructive feedback, from which we realized that the main shortcomings of this theory-intensive paper currently is to further reduce the understanding bar for non-specialists or someone who lacks a basic math background.\\n\\n\\n**We decided to withdraw our paper, revise it harder, and submit it to the next high-level conference. We hope we can meet you somewhere else shortly and meet you at the next venue.**\\n\\n\\nSince you might know who we are soon, we kindly hope we can have a chance to cooperate with you in the future. Please go through our webpage to see our latest research and do not hesitate to help us increase our academic impact by spreading, citing, or discussing our work if you like.\\n\\n\\n\\n\\nSincerely,\\n\\nThe Authors.\"}", "{\"title\": \"Rebuttal to sjg4\", \"comment\": \"**We are truly grateful for your support to our work, and we\\u2019re moved to tears!** (\\u2565\\ufe4f\\u2565) (\\u2565\\ufe4f\\u2565) (\\u2565\\ufe4f\\u2565)\\n\\nWe hope below responses could address your questions and **encourage you to further champion and fight for our paper in the later reviewer discussion phase.**\"}", "{\"comment\": \"> W1: The description of the function C is vague and could be clarified using a specific task, such as binary classification. Additionally, C should be denoted as a function of a certain downstream task.\\n> \\n\\n**R-1:** We appreciate the reviewer\\u2019s suggestion about the clarity of *C*\\n\\n- In our paper, we defined $C$ as a mapping function that maps the original graph $G_{ori}$ to embedding $C(G_{ori})$, which achieves good performance in downstream tasks.\\n- To make this concept more concrete, we can illustrate through a binary classification task: Consider an optimal model $F$ that accurately performs binary classification on graphs, consisting of an encoder $E$ and a task head. 
The encoder $E$, which maps graphs to a vector space, can be viewed as a specific implementation of the function $C$. Here, $C$ mostly refers to a potential solution to the downstream task, which is usually unseen; we can treat it as an implicit function of the downstream task.\\n- However, there are many possible downstream tasks, which makes further discussion of a detailed $C$ impractical. Our paper focuses on the nature of graph prompts in manipulating graph data. However, the capability of graph data manipulation is not always needed in graph-based applications, and figuring this out is far beyond the scope of this paper. Therefore, we avoid being trapped in detailed tasks so that we can **clearly present** the nature of graph prompts in theory to our readers, without too many trivial distractions. We believe it is an important principle for a theory-intensive paper to present its theory in its purest form.\\n- To further reduce the bar of understanding for our readers, we follow the suggestion given by the reviewer and provide more explanation/discussion of $C$ in our paper.\\n\\n---\\n\\n> W2: The terms \\u03a6, \\u03bc, and \\u03bb in Equation (4) are not explained and should be clarified.\\n> \\n\\n**R-2:** Let us clarify the roles of *\\u03a6*, *\\u03bc*, and *\\u03bb* in Equation (4):\\n\\n- Regarding *\\u03bc* and *\\u03bb*: As mentioned in line 266, these terms represent a decomposition of the error into two multiplicative components. *\\u03bc* corresponds to model-dependent factors, depending solely on the model parameters. *\\u03bb* corresponds to graph-dependent factors (given a fixed downstream task). We use \\u201ccorrespond to the model and graph\\u201d deliberately, as these are not explicit functions with definite computational procedures, but rather theoretical constructs representing these dependencies.\\n- Regarding *\\u03a6*: This is an angle measurement that depends solely on the model parameters.
It characterizes the landscape (expression space) formed by the embedding vectors as the prompt parameters vary. Specifically, *\\u03a6*/2 provides an upper bound for the angle between the target embedding *C*(*G*) and the expression space. Intuitively, a more expressive model leads to a more flexible expression space, which results in smaller angles and thus tighter upper bounds. To reduce the understanding bar for our readers, we use the phrase \\u201ca measurement of the model\\u2019s expressiveness\\u201d to capture this intuition in a more accessible way.\\n- We give more details on these points in Appendix A.3.3. We hope this explanation clarifies them. Please let us know if you have any further questions or need additional clarification.\\n\\n---\\n\\n> W3: The precondition in Corollary 1 is not specified and should be stated explicitly.\\n> \\n\\n**R-3:** We thank the reviewer for raising this point. Kindly note that the *\\u03f5* in Corollary 1 follows exactly the same definition and conditions as in Theorem 8. This corollary is derived directly from Theorem 8 and is a basic property of the Chi distribution.\"}", "{\"comment\": \"Thanks for your reply.\\n\\nWhen we set $X$ by using the first $N-1$ rows for the original $|\\\\mathcal{V}|=N-1$ graph nodes with a zero vector as the last row, computing $X_\\\\omega$ in Equation (23) means we replace the last row of $X$ with the prompt vector $p$, given that $e_N$ is a one-hot vector $[0, 0, \\\\cdots, 1]^\\\\top$ according to line 868. Therefore, All-in-One in Lemma 1 will convert the original graph $\\\\mathcal{G}$ with $N-1$ nodes to a prompted graph $\\\\mathcal{G}_\\\\omega$ with $N$ nodes by adding a new prompt node to the original graph, while the feature vector of each graph node is unchanged. However, I am afraid that this formulation is inconsistent with the design in All-in-One [1].
According to Section 3.3.4 in All-in-One [1], the inserting pattern is defined as adding prompt tokens to node features: $\\hat{x}_i = x_i+\\sum\\_{k=1}^{|\\mathcal{P}|} w\\_{ik} p_k$ ($|\\mathcal{P}|=1$ in your setting). Therefore, All-in-One should alter the feature matrix of the graph nodes. Could you help to clarify this?\\n\\n[1] Sun, Xiangguo, et al. \\\"All in one: Multi-task prompting for graph neural networks.\\\" KDD. 2023.\"}", "{\"title\": \"Rebuttal to VdLJ\", \"comment\": \"We truly appreciate your positive support of this work! According to the other reviewers\\u2019 scores (8 and 3), your opinion is very important to us because it is just like a ``battleground state of the American Presidential Election`` : ) Below we respond to your questions and suggestions one by one. **We hope they will encourage you to raise your score further!**\"}", "{\"comment\": \"> Q1: Why is the error related to the prompt design in Theorem 5? Intuitively, the term $||C(G)||$ appears to be related to the graph itself and the downstream task, suggesting it may not depend on the prompt design.\\n> \\n\\n**R-Q1:** Thanks for your question. Kindly note that $||C(G)||$ appears in the **upper bound** of the error, not in the error itself. Since $C$ is usually unseen (an optimal solution to the downstream task), the graph prompt aims to approximate $C(G)$ via $P_w(G)$ and the frozen graph model $F_{\\theta}$, leading to $||F_{\\theta}(P_w(G))|| \\rightarrow ||C(G)||$. That means (although the upper bound of the error is related to the graph itself and the downstream task) the error itself in the graph prompting setting is also related to the prompt design in practice. We draw this discussion just to indicate that some intuitive findings from previous empirical work, like the paper All-in-One, are reasonable.
In that paper, the authors empirically observed that the error of graph prompts in the approximation of graph data manipulations may relate to the non-linear layers of the graph model and the prompt design (please see our motivation, lines 126-128, for your information), which now has more solid evidence from our theory.\\n\\n---\\n\\n> Q2: Does the number of non-linear layers affect the error bound in Theorem 5?\\n> \\n\\n**R-Q2:** Yes, it does. This is a very interesting and insightful question, and the answer can guide real engineering practice. We believe that without our theory, answering this question would be very hard and intractable. But now, anyone who carefully reads our paper can systematically answer this question (although it might deserve a new research paper to fully answer it using our theory, and we encourage other researchers/engineers to follow our work to study this question further). Here we give a basic analysis of this problem with the help of our theory (e.g.\\u00a0Theorem 5): The number of non-linear layers influences the error bound through its effect on *\\u03a6*, via two competing effects:\\n\\n- Positive Effect: Increasing the number of non-linear layers may enhance model expressiveness. This can lead to a decrease in the angle *\\u03a6*/2, potentially resulting in a tighter error bound.\\n- Counter Effect: With increased model expressiveness, $F_{\\theta^*}(G)$ evolves, and the gap between the two embeddings, $F_{\\theta^*}(G)$ and $C(G)$, might increase, which could lead to a larger angle $\\Phi/2$.\\n- Empirical Support: Our experimental results (Figure 4) support the interplay between these competing effects.\\n\\n---\\n\\n> Q3: How is C(G) computed in Figure 1?\\n> \\n\\n**R-Q3:** *C*(*G*) denotes an optimal function for the downstream task, which is not accessible without a specific task.
Since the ultimate purpose of graph prompting is to approximate graph operation, we here treat *C*(\\u22c5) as various graph data permutations such as adding/deleting nodes, adding/deleting/changing edges, and transforming features of a given graph *G*. Then we wish to see how well the graph prompt reaches *C*(*G*) by manipulating graph data with a graph prompt. Then *C*(*G*) can be treated as graph-level embedding after we change the given graph *G*.\"}" ] }
C1Wp4ubvXZ
FairlyUncertain: A Comprehensive Benchmark of Uncertainty in Algorithmic Fairness
[ "Lucas Rosenblatt", "R. Teal Witter" ]
Fair predictive algorithms hinge on both equality and trust, yet inherent uncertainty in real-world data challenges our ability to make consistent, fair, and calibrated decisions. While fairly managing predictive error has been extensively explored, some recent work has begun to address the challenge of fairly accounting for irreducible prediction uncertainty. However, a clear taxonomy and well-specified objectives for integrating uncertainty into fairness remains undefined. We address this gap by introducing FairlyUncertain, an axiomatic benchmark for evaluating uncertainty estimates in fairness. Our benchmark posits that fair predictive uncertainty estimates should be consistent across learning pipelines and calibrated to observed randomness. Through extensive experiments on 10 popular fairness datasets, our evaluation reveals: (1) A theoretically justified and simple method for estimating uncertainty in binary settings is more consistent and calibrated than prior work; (2) Abstaining from binary predictions, even with improved uncertainty estimates, reduces error but does not alleviate outcome imbalances between demographic groups; (3) Incorporating consistent and calibrated uncertainty estimates in regression tasks improves fairness without any explicit fairness interventions. Our benchmark package is designed to be extensible and open-source. By providing a standardized framework for assessing the interplay between uncertainty and fairness, FairlyUncertain paves the way for more equitable and trustworthy machine learning practices.
[ "Heteroscedastic", "Uncertainty", "Fairness", "Benchmark" ]
Reject
https://openreview.net/pdf?id=C1Wp4ubvXZ
https://openreview.net/forum?id=C1Wp4ubvXZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zZRFcmpZYR", "tdL0Qj6u8d", "smk19wigiy", "rdU4KCbefN", "pOnJidz5Bk", "p2Dpy8YyCP", "mmM2dcSbvx", "mXuqdCt4D7", "lyYxjxhgUU", "lCfj3cePgw", "iOUMDEdOJc", "ggaiOg8uzh", "eQo7owZlWX", "dEBokIHuoc", "coZwMv7M3f", "bdMM2Cc27g", "WSqks5atrp", "U2hV6Sblsh", "Qs6wfKvyB8", "Po7nx6HopF", "O5xVPTlHM1", "N1OnCNB8Pa", "JhmTiVWo1z", "J1g4PcoYUG", "IqBhO2b6TD", "IerOiLGKQ5", "GJs8lAwqtL", "AO4qGizkrw", "8YvNXkDD6a", "7iBeU809jJ", "6mZAbegc2n", "5A3vHJdLlP", "4mFdUuzu0P", "37UA8LlglL", "1Lwl5ZvvKH", "1H2YMWrGfP", "0olPAp6ScW", "0LkFgoTSpn" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732379570779, 1732595271454, 1732551901743, 1732455206997, 1732727338806, 1734729362561, 1732455314289, 1732215901906, 1732722377991, 1732658553079, 1732215700929, 1732656685598, 1732217111326, 1732215933339, 1732653613948, 1730480549118, 1733174327256, 1732217417672, 1732215978446, 1732217120286, 1732304248760, 1732217211572, 1732217430104, 1733268419025, 1730865921395, 1737523892407, 1732665122886, 1732455133648, 1730574347109, 1733174333220, 1732418805714, 1730740437898, 1732215547609, 1732551767848, 1730664723139, 1732217293041, 1733150668349, 1733244625307 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_m3oY" ], [ 
"ICLR.cc/2025/Conference/Submission8180/Reviewer_JxsV" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Area_Chair_bxVW" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_ufZ7" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_ufZ7" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_ufZ7" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_ufZ7" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_5aQp" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_JxsV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_5aQp" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_LGgY" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_LGgY" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ "ICLR.cc/2025/Conference/Submission8180/Reviewer_m3oY" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8180/Reviewer_ufZ7" ], [ "ICLR.cc/2025/Conference/Submission8180/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Authors, thank you for your replies. After reading the comments, I do believe the paper can add value to the conference and would like to maintain my score\"}", "{\"title\": \"Reply to authors' feedback\", \"comment\": \"Thanks for the reply. I am glad to see my comments have been adopted. Additionally, I reviewed the links to the ICLR Call for Papers and some past papers. I agree that this paper is within the scope.\\n\\nI acknowledge the valuable contribution of this paper in developing a benchmark across multiple datasets focused on fairness and uncertainty. However, I have reservations about the contribution of this paper integrating uncertainty evaluation into fair machine learning and the overall impact of this integration on this field. I am inclined to maintain my current rating. Thanks.\"}", "{\"comment\": \"We realize that the end of the ICLR rebuttal phase is a particularly busy time! Still, if you have a chance, we\\u2019d love to hear your thoughts on whether our response has addressed your concerns, or if there\\u2019s anything else we can do to clarify.\"}", "{\"comment\": \"Thank you for taking the time to consider our revisions and rebuttal, and for raising your score! We\\u2019re available to address any remaining questions or concerns at your convenience before the deadline.\"}", "{\"comment\": \"As you said, we may have to agree to disagree on which exact philosophical framing of fairness and consistency makes the most sense. 
We do, however, appreciate the push to consider this alternative notion of consistency we've jointly developed, and we'll add the explanation and alternative definition to the paper to enrich that discussion.\\n\\nAs a last appeal, we'd like to point out that our results in **Figures 18 and 19** (which show differences between groups in terms of different abstention and error rates) could be easily modified under the definition of consistency between subgroups. We could plot the per group average uncertainty (y-axis) vs. varying the hyperparameters across different models (the *max_depth* and *reduction_thresholds* for XGBoost and the $\\\\alpha$ weight decay regularization parameter for the neural model). The current results in **Figures 18 and 19** seem to suggest that the per-group consistency would differ across different parameter settings, which would be good empirical results to pair with the new *Consistency Across Subgroups* definition. Let us know what you think of that proposal, and thank you again for continuing to engage with us in this review process!\"}", "{\"metareview\": \"This paper introduces FairlyUncertain, a tool that evaluate uncertainty in fairness for machine learning algorithms. This framework emphasizes consistent uncertainty estimation and be calibrated to observed randomness.\\n\\nThe paper falls a bit short in its contribution. The main observations and reported findings suffer from generalizability without further exploring variations in models, datasets and fairness measures. In addition, the paper can benefit from better justifying the choice of computing the consistency and calibration of models.\", \"additional_comments_on_reviewer_discussion\": \"After rebuttal, the reviewers had some remaining concerns that made them hesitate to recommend an acceptance, including the technical contribution being a little bit marginal.\"}", "{\"comment\": \"Thank you for the additional time and effort spent on our revised submission and rebuttal! 
If you have any further questions or suggestions, we\\u2019d be happy to address them before the deadline.\"}", "{\"comment\": \"Thank you so much for the time you took to review our paper and provide constructive feedback! We will respond to each of your comments/questions below, briefly summarizing the comment inline.\\n\\n> **Axiom 2.3 (consistency) \\u2026 would benefit from explicit clarification on the stability limits of uncertainty under various hyperparameter changes**\\n\\nW1. Thank you for the suggestion to clarify stability limits for consistency; we have adjusted Axiom 2.3 to this end as follows:\\n\\n$\\\\textbf{Consistency}~~$\\nLet $\\\\mathcal{P}$ and $\\\\mathcal{P}'$ be similar learning pipelines as per \\\\textbf{Definition 2.2}, differing only in hyperparameter $\\\\lambda_j$ with $|\\\\lambda_j - \\\\lambda_j'| \\\\leq \\\\tau_j$. Let $f$ and $f'$ be the predictive functions produced by $\\\\mathcal{P}$ and $\\\\mathcal{P}'$, respectively. Then there exists a non-decreasing function $\\\\delta_j: [0, \\\\tau_j] \\\\to \\\\mathbb{R}_{\\\\geq 0}$ with $\\\\delta_j(0) = 0$, such that for all inputs $(\\\\mathbf{x}, a) \\\\in \\\\mathcal{X} \\\\times \\\\mathcal{A}$, the uncertainty estimates satisfy,\\n$$|\\\\sigma(\\\\mathbf{x}, a) - \\\\sigma'(\\\\mathbf{x}, a)| \\\\leq \\\\delta_j(|\\\\lambda_j - \\\\lambda_j'|).$$\\nThis means that small changes in hyperparameter $\\\\lambda_j$ within the threshold $\\\\tau_j$ lead to controlled variations in uncertainty estimates, bounded by $\\\\delta_j$.\\n\\n\\n> **\\u2026addressing why abstention fails to impact fairness in this context could inform potential adjustments to the benchmark\\u2026explain conditions under which abstention could be counterproductive [to] deepen understanding of its impact on fairness**\\n\\nW2. Thank you for this comment, we have included a more in-depth discussion of abstention in the revised PDF, specifically addressing its impact on fairness metrics. 
While abstention is commonly used as a fairness strategy by leveraging uncertainty estimates, our findings demonstrate that it does not inherently reduce demographic disparities in binary classification tasks. In certain cases, abstention can inadvertently *introduce* or *exacerbate* biases.\\n\\nAs a simple, illustrative example, consider a binary classification model evaluated on two demographic groups, $A$ and $B$, each with $N_A = N_B$ examples. Both groups contain an equal number of positive ($Y = 1$) and negative ($Y = 0$) examples ($N_{A,1} = N_{A,0} = N_{B,1} = N_{B,0}$). Without abstention, the model predicts perfectly, achieving a true positive rate (TPR) and false positive rate (FPR) of $\\\\text{TPR}_A = \\\\text{TPR}_B = 1.0$ and $\\\\text{FPR}_A = \\\\text{FPR}_B = 0.0$, satisfying equalized odds. Say the model incorporates abstention based on uncertainty, and for group $A$, the model has arbitrarily low uncertainty and continues predicting perfectly ($\\\\text{TPR}_A = 1.0, \\\\text{FPR}_A = 0.0$). However, the model has \\\\textit{high uncertainty} for group $B$, and thus the model abstains on all positive examples ($Y = 1$), resulting in $\\\\text{TPR}_B = 0.0$, while still predicting negatives correctly ($\\\\text{FPR}_B = 0.0$). This abstention-induced disparity in TPRs ($\\\\text{TPR}_A - \\\\text{TPR}_B = 1.0$) violates the equalized odds fairness metric to an arbitrary degree. Though this is an extreme example, we observe this *imbalance in subgroup abstention rates* in our experimental results, **highlighted in Table 3.** We have **included this example in Appendix E in the revised PDF (see Example E.1**). \\n\\n> **strengthen its analysis by discussing why certain methods perform better/worse across different datasets\\u2026demographic-specific breakdowns that could be better integrated into the main analysis to improve understanding of method reliability and demographic-specific impacts**\\n\\nW3. Thank you for highlighting these results. 
We have **extended our results to include experiments** examining per-group error rates in the abstention model. In **Appendix E of the revised paper, Figures 18 and 19** display the relationship between error rates and abstention rates for each protected group. In Figure 18, we set an overall abstention rate $r$ (shown on the $x$-axis) and plot the error rates for each protected group. Our results reveal that the algorithms yield varying error levels across protected groups on both the German and ACS Public Coverage datasets. In Figure 19, we present error rates (on the $x$-axis) alongside the abstention rates for each protected group. As expected, the random baseline exhibits no difference in abstention rates between protected groups, as its uncertainty estimates are random. In contrast, the Selective Ensemble demonstrates significant disparities in abstention rates across groups. This experiment adds valuable insight to our benchmark, and we plan to incorporate similar analyses as FairlyUncertain evolves. We welcome any suggestions for additional experiments!\"}", "{\"comment\": \"Yeah this seems like a closer match to what I would expect - I think this paper either should be reframed away from fairness a bit, or towards a definition of this rough flavor. That said it is a fairly significant edit, both from a theory + experimental perspective.\"}", "{\"comment\": \"Responding below:\", \"consistency_as_fairness_axiom\": \"you're right that we may have to agree to disagree here. There is some old fairness work that I've seen around Lipschitzness wrt input as a fairness property but that seems quite different from what you're looking at here. Re: your example, a change in hyperparameter which yields a change in output is not necessarily unfair - in fact, since hyperparameter choice has a global effect there is a chance that it treats all individuals identically. 
For instance, it's possible that a hyperparameter change uniformly reduces all uncertainty estimates by X% - I would not necessarily call this an unfair impact. Of course a uniform shift across the input space is unlikely but I think the point holds that the (un)fairness of the impact seems somewhat orthogonal to the size of the change (which could be 5 or 50%).\", \"new_def_of_consistency\": \"I think this looks fairly similar to me as far as my previous points are concerned.\"}", "{\"comment\": \"We appreciate the time you took to review our paper and suggest improvements! We will respond to each of your comments/questions below, briefly summarizing the comment inline.\\n\\n> **does not adequately justify the importance of the consistency and calibration axioms themselves within the context of predictive fairness**\\n\\nW1. We appreciate this comment greatly! After reviewing our paper body, we agree. To address this, we have included a short section in our revised PDF titled **\\u201cConnecting Consistency and Calibration to Fairness Principles.\\u201d** We will restate the content here, written in a slightly more conversational style, for convenience. \\n\\nUncertainty estimation is widely recognized as a crucial aspect of transparent machine learning practices (Bhatt et. al 2021, etc.). So how does this connect to fairness? Uncertainty arises due to variability in training data and randomness in learning algorithms (as we discuss in the intro, outline in Figure 1, Table 1, etc.), leading to a distribution of possible models rather than a single deterministic one. Ignoring this distribution can result in arbitrary decisions, especially for individuals whose predictions might vary across different modeling choices or sources of uncertainty. Such arbitrariness could disproportionately and unfairly affect minority groups in the data (e.g. 
affect fairness in a way that would be unaccounted for if only considering fairness measures related to the prediction).\\n\\nRecent work has shown that state-of-the-art fairness interventions can actually exacerbate this kind of \\u201cpredictive arbitrariness\\u201d (Long et. al 2024, for example). Models that are similar in terms of fairness and accuracy but have different parameters can assign vastly different predictions to individuals, and fairness constraints can intensify this issue. Our axiom of consistency for fair uncertainty estimation builds on this insight by asserting that uncertainty estimates should not vary significantly across similar learning pipelines. But consistently bad uncertainty estimation for subgroups in the data could satisfy this axiom while potentially failing to achieve fairness. Our axiom of calibration aims to prevent this case; a calibrated uncertainty estimate would avoid systematic biases in uncertainty estimates that could disadvantage certain groups. If, conversely, a model consistently underestimated uncertainty for a particular group, it could overstate its confidence in predictions for that group, leading to unfair treatment. Therefore, we argue that adhering to the axioms of consistency and calibration is a necessary component of any fair uncertainty estimation process.\\n\\n> **benefit from introducing the importance of fairness alongside uncertainty earlier, so readers understand the relationship between these two aims from the outset**\\n\\nW2. Again, we appreciate your constructive observation regarding clearer integration of fairness and uncertainty. As we noted in our response to (W1) above, we have included a section early in the paper that directly addresses this comment. To summarize, we argue that **consistency** ensures that uncertainty estimates are stable across similar models, preventing arbitrary disparities in predictions. 
**Calibration** ensures that uncertainty estimates are accurate across different groups, preventing systematic biases that could disadvantage certain groups. \\n\\n> **better suited for a benchmark track or tools/package track than a general research track**\\n\\nW3. Thank you for your feedback on the positioning of our contribution. While our paper does present a framework and accompanying package, we believe it offers more than an implementation - it introduces non-trivial experimental setups and metrics for evaluating uncertainty and fairness and provides insights into the results of extensive evaluations. Additionally, to your point about conference suitability, ICLR includes \\\"datasets and benchmarks\\\" among its Subject Areas (https://iclr.cc/Conferences/2025/CallForPapers), and there have been a number of influential benchmarks published at ICLR on fairness-related themes in the past couple of years (Han et. al 2024, https://openreview.net/pdf?id=TzAJbTClAz , Cruz et. al 2024 https://openreview.net/pdf?id=jr03SfWsBS , Zong et. al 2023 https://openreview.net/pdf?id=6ve2CkeQe5S , to name a few.)\"}
Thanks again!\\n\\n> **Def of consistency as fairness axiom, relationship to fairness [i.e. comment \\u201c...b) it's super fairness related \\u2026\\u201d]**\\n\\nYour point is well taken. We likely won\\u2019t do much more to convince you that consistency is an important fairness consideration, besides to frame the following counterfactual: for two similar learning pipelines that produce nearly identical predictions on the same task, suppose their associated uncertainty estimates differ wildly due solely to a minor change in a single hyperparameter - perhaps the random seed for model initialization, or changing a regularization parameter from 0.5 to 0.6. Is it fair that such an insignificant difference leads to substantial disparities in uncertainty assessments? This variability means that individuals could receive markedly different uncertainty estimates purely because of an arbitrary hyperparameter choice, not because of any meaningful change in the data or model structure. We argue this is a fairness concern, but respect that you may view it as more of a model class stability property.\\n\\n> **Def of consistency as fairness axiom, invariance to hyperparameters [i.e. comment \\u201c..a) this definition achieves tha \\u2026\\u201d]**\\n\\nAfter further reviewing your comment, and our definition, we think that we could restate the consistency definition formally as follows, while still respecting the existing benchmark.\\n\\n**Consistency** For any two learning pipelines $\\\\mathcal{P}$ and $\\\\mathcal{P}'$ with hyperparameters $\\\\lambda$ and $\\\\lambda'$, let $\\\\delta(\\\\lambda, \\\\lambda')$ be a distance metric quantifying the difference between hyperparameters. 
The predictive functions $f_\\\\mathcal{P}$ and $f_{\\\\mathcal{P}'}$ produced by these pipelines should satisfy:\\n$$\\n\\\\| \\\\sigma - \\\\sigma' \\\\| \\\\leq L \\\\cdot \\\\delta(\\\\lambda, \\\\lambda'),\\n$$\\nwhere $\\\\sigma$ and $\\\\sigma'$ are the uncertainty estimates from $f_\\\\mathcal{P}$ and $f_{\\\\mathcal{P}'}$, respectively, and $L$ is a Lipschitz constant.\\n\\nWould stating it in this manner (or a similar manner) help with your suggestion to state the definition along the lines of \\u201cbounded change in output for any change in input?\\u201d If you believe this is a clearer definition, and more aligned with fairness, we\\u2019d be happy to update the definition of consistency in our paper.\\n\\n> **Calibration metrics \\u2026 adding more literature to the related work (eg \\\"Measuring Calibration in Deep Learning\\\" is an example paper from this literature)**\\n\\nThanks for calling our attention to this again; we appreciated your initial suggestion, as the ECE metric has strengthened the clarity of our calibration results as is! We are additionally working on putting together a discussion on the calibration literature, starting with the \\u201cMeasuring Calibration in Deep Learning\\u201d paper, and papers they cite/who cite them. We will post a version of that discussion here when we are done, for you to review, and then add it to our paper body.\\n\\n**Thank you for continuing to engage with our work, and for the suggestions. Please let us know if we can make other changes that you feel would strengthen the paper.**\"}", "{\"comment\": \"> **new calibration metrics to enhance FairlyUncertain?**\", \"q2\": \"This is a very good question. We default to negative log likelihood as it is very commonly used in loss function and has strong connection to information entropy. However, we have added results on another common calibration metric, Expected Calibration Error (ECE, Naeini et. al 2015). 
We also have plans to include other scoring rules (Spherical, Brier, etc.). We will discuss ECE in comparison to NLL here (and note this section has also been added in our revised PDF, in the appendix).\\n\\nExpected Calibration Error (ECE) and Negative Log Likelihood (NLL) will differ fundamentally in their model calibration assessment. Given predictions $[p_i]^N_{i=1}$, uncertainty estimates $[\\\\sigma_i]^N_{i=1}$, and true labels $[y_i]^N_{i=1}$, ECE groups predictions into $M$ confidence bins $[B_m]^M_{m=1}$ based on $p_i$, and computes calibration as \\n$\\\\text{ECE} = \\\\sum_{m=1}^M \\\\frac{|B_m|}{N} \\\\left| \\\\text{acc}(B_m) - \\\\text{conf}(B_m) \\\\right|$, where $acc(B_m) = \\\\frac{1}{|B_m|} \\\\sum_{i \\\\in B_m}$ $1$ $( \\\\hat{y}_i = y_i )$, while $conf(B_m)$ is the sum of each $p_i$ normalized by $\\\\frac{1}{|B_m|}$.\\n\\nIn contrast, our modified NLL incorporates uncertainty estimates directly by adjusting predicted probabilities as $\\\\tilde{p}_i = (p_i > 0.5) p_a + (p_i \\\\leq 0.5) p_b$, where $p_a = (1 + \\\\sqrt{1 - 4 \\\\sigma_i^2})/2$ and $p_b = (1 - \\\\sqrt{1 - 4 \\\\sigma_i^2})/2$. We then compute NLL in standard fashion e.g. $NLL = -\\\\frac{1}{N} \\\\sum^N [ y_i \\\\log(\\\\tilde{p}_i) + (1-y_i) \\\\log(1-\\\\tilde{p}_i) ].$\\n\\nECE provides an interpretable, aggregate view of calibration by measuring the alignment of predicted probabilities with empirical accuracy in confidence intervals (Naeini et al, 2015). However, it is sensitive to binning choices and lacks granularity at the individual prediction level. Our adjusted NLL method avoids binning, directly incorporating uncertainty estimates to evaluate calibration at a finer resolution, penalizing overconfident errors and underconfident correct predictions. While this makes NLL more sensitive to uncertainty quality, it may conflate calibration with model discrimination, and its dependence on predicted standard deviations assumes valid uncertainty estimates. 
We'd expect to prefer something like ECE for global calibration trends, while our NLL-based approach is suited to uncertainty-aware evaluation at the individual level. This makes the metrics complementary, and we thank the reviewer for the suggestion of adding additional calibration metrics to the benchmark, to better capture distinct aspects of calibration performance. **Comments and explanations for ECE have been added to Appendix Section D.1.**\\n\\n\\n> **Why does regression improve fairness without explicit interventions**\", \"q3\": \"Thank you for highlighting this result, we are happy to offer some thoughts on the matter here. The \\\\textit{Normal NLL} method likely improves fairness without explicit interventions due to its ability to account for heteroscedasticity in the data. Its loss function, $-\\\\log \\\\sigma + \\\\frac{1}{2}\\\\left(\\\\frac{y - \\\\mu}{\\\\sigma}\\\\right)^2,$ includes terms that regulate the variance ($\\\\sigma$), preventing it from becoming arbitrarily large while encouraging larger $\\\\sigma$ for high residuals ($|y - \\\\mu|$). This adaptive mechanism ensures accurate estimation of variance for each prediction. By modeling variance as a function of input features, which may include or correlate with the protected attribute $A$, the method captures group-specific heteroscedasticity.\\n\\nWhen predictions include uncertainty (e.g. assumed drawn from $y \\\\sim \\\\mathcal{N}(\\\\mu, \\\\sigma^2)$), the resulting cumulative distribution functions (CDFs) are smoothed, particularly for high-uncertainty predictions. This smoothing reduces sharp differences between groups, aligning the predictive distributions across protected groups. As \\\\textit{Normal NLL} is both consistent and calibrated, its uncertainty estimates reliably reflect true uncertainties, minimizing group-specific biases. 
This supports fairness by enabling the model to satisfy the uncertainty-aware statistical parity (\\\\textit{UA-SP}) condition: $\\\\Pr(\\\\tilde{f}(\\\\mathbf{X}, A) \\\\geq y \\\\mid A = a) = \\\\Pr(\\\\tilde{f}(\\\\mathbf{X}, A) \\\\geq y),$ where $\\\\tilde{f}$ incorporates group-specific uncertainties. To summarize, by smoothing predictions through accurate variance estimation, the method naturally reduces disparities across groups, and thus this likely explains why we observe some fairness improvements.\"}", "{\"comment\": \"> **expanding to deep learning models like MLPs, or transformers would align with recent trends in fairness research and test FairlyUncertain\\u2019s utility on more complex architectures**\\n\\nW4. This is an excellent comment, one that was echoed by other reviewers! We have included a neural network (linear layers with non-linear ReLU activations) tuned for tabular data classification in our experiments, and **added results to the appendix of the paper (see **Figures 9 and 13** in the revised PDF).**\\n\\nWe are excited to continue updating the benchmark with models from different model classes, and agree that expanding to recent transformer advances for tabular classification would be great! This is why we kept our evaluations general and our benchmark extensible.\\n\\n> **what extent does the authors' focus on uncertainty help the practical selection of fairness measures and mitigation strategies**\\n\\nQ1. Good question. Broadly, we believe an axiomatic approach is crucial to principally selecting fair algorithms in settings with unavoidable uncertainty. Our experiments demonstrate that algorithms designed to be \\u201cfair\\u201d vary widely in their ability to produce consistent and calibrated uncertainty measures. 
Practically, we find the Binomial NLL method to be the most calibrated and consistent method; as such, we would suggest at least using it as a baseline in practical experiments.\\n\\n> **be adapted for more complex models or architectures and extended to additional fairness notions, such as intrinsic fairness**\\n\\nQ2. Beyond the more complex models (e.g., the new neural network experiment described above), we would be happy to add additional fairness notions to complement the ones we already evaluate (e.g., statistical parity, equalized odds, equal opportunity, disparate impact, predictive parity, and difference in false positive rate). However, we\\u2019re unable to find a formal definition of \\u201cintrinsic fairness\\u201d, could you please point us in the direction of a work that defines this? Do you mean how fair the initial classification setup is, without fairness interventions? Thank you in advance for clarifying!\\n\\n> **societal implications of incorporating uncertainty in the selection of fair models\\u2026 model multiplicity highlights the uncertainty involved [in prediction] and explores its societal, legal, and philosophical implications\\u2026\\u201d**\\n\\nQ3. You are correct: predictive uncertainty estimation is not merely a technical consideration but has profound implications for fairness and justice in algorithmic decision-making (Bhatt et, al 2021, Cooper et. al 2023). Our research contributes the following insight: naively incorporating uncertainty into fair models can lead to unpredictable and potentially adverse outcomes for certain demographic groups. For example, our empirical analyses demonstrate that while abstention methods - where models defer decisions under high uncertainty - can reduce overall error rates, they do not necessarily improve fairness metrics such as statistical parity or equalized odds. This unpredictability may exacerbate existing disparities and undermine trust in these systems. 
Furthermore, the arbitrary application of uncertainty estimates might violate anti-discrimination laws and regulations. Uncertainty estimation is also integral to procedural justice (Rawls, 1971), which concerns the fairness of the methods and procedures used to arrive at decisions. By advocating for uncertainty estimates that are consistent across similar models and calibrated to actual data variability, we provide a more robust foundation for ethical algorithmic decision-making. We have added this discussion to a section titled **Societal Implications in Appendix Section A of our revised PDF.**\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for the response.\\n\\nDef 2.2. - I think this may just be a clarity question - the language says \\\"differ only in hyperparameter settings\\\" which seems to indicate that they may differ in multiple hyperparameters, but the formalization suggests only 1 hyperparameter will differ\", \"def_of_consistency_as_a_fairness_axiom\": \"the new text connecting consistency to fairness principles is helpful. I still find it a bit odd in the sense that Lipschitzness and invariance are somewhat separate goals - for instance, a Lipschitz function can still respond quite wildly to large changes in input. I would instead think of a natural objective as being, for instance, bounded change in output for any change in input. I also wonder how these definitions correspond to various reparameterizations of the hyperparameter space. 
In any case, I do think invariance to hyperparameters is a reasonable thing to want but I'm not totally convinced that a) this definition achieves that, or b) it's super fairness related (although the added argument helps)\", \"calibration_metrics\": \"ECE is a fine metric to use in the table, I don't think others are necessary - I do think there's more of a literature out there that might be worth looking at in the related work (e.g., \\\"Measuring Calibration in Deep Learning\\\" is an example paper from this literature)\\n\\nI think there are useful pieces of this paper but I'm not sure I'm ready to raise my score here. I will take another look at the other reviews as well.\"}", "{\"summary\": \"This paper presents the FairlyUncertain benchmark, which is aimed at measuring the fairness properties of model uncertainty estimates. They focus on two metrics which are called consistency and calibration, and explore the results on these metrics across several algorithms and several datasets, concluding that probabilistically-grounded methods tend to return the best results. They explore the ramifications of abstention on fairness metrics, and propose a new uncertainty-aware form of statistical parity.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"this is an important topic for which there are no currently existing benchmarks that I\\u2019m aware of, I think this could be useful for people\", \"good level of thoroughness here exploring the various different metrics across datasets\", \"abstention experiments are interesting - surprising to me that abstention doesn\\u2019t improve most fairness metrics\", \"uncertainty-aware Statistical Parity is a good idea - incorporating the uncertainty estimate into a CDF-based metric makes sense\"], \"weaknesses\": [\"I\\u2019m not sure I agree with the characterization (around L90) of A/B types as unmeasurable and C/D/E types as estimatable from a fixed sample. 
I think both of these statements are conditional on various distributional assumptions - for instance, you can measure individual variance if you assume you know the mean (or its parametric form and inputs). I think explicating the underlying data generating process would make this whole contribution more substantial\", \"A couple issues with formalisms in Section 2: 1. I don\\u2019t understand Def 2.2 - \\u201cdiffer only in hyperparameter settings\\u201d and \\u201cthere is some j\\u2026\\u201d seem to say different things: the first implies P and P\\u2019 are in some shared \\u201cpipeline class\\u201d parametrized by different hyperparameters, the second implies that just one hyperparameter differs. 2. On Line 127, f(P) is used - this seems wrong to me, as on L117, f is named as the *output* of P, not a function of it\", \"I\\u2019m not sure I agree with the given definition of consistency as a useful fairness axiom - it seems like more of a Lipschitz condition over what a well-parametrized pipeline might look like. My impression of a fairness axiom of this type might be of the flavor that uncertainty estimates should be as invariant as possible to hyperparameter choices (I\\u2019m not sure this is a good axiom either but it seems more fairness-related).\", \"there is a rich literature on calibration metrics which is not connected to at all in this paper - it would be good to get a better understanding of how those relate to the chosen metrics\", \"L287 and elsewhere: for consistency, it seems like a non-robust choice to output a maximum std. Dev over individuals - I\\u2019m wondering if like a 90th percentile might be a better choice\", \"L391 - it seems odd to optimize an objective function that mixes statistical parity and equalized odds. Usually, one or the other is picked, since they are very different objectives\", \"Def 6.2 - it seems overly specific to me to assume that the model outputs are parametrizing a normal distribution. 
In particular, I don\\u2019t think this makes sense in the binary Y case - and the end of the definition (CDF comparison for all outcomes y \\\\in Y) I think doesn\\u2019t make sense in the binary y case either (rather you want this to be true at all points in the range)\"], \"small_points\": [\"L142: \\u201ca model that always returns 0 is consistent\\u201d - by the previous definition this is not true, as the definition of consistency applies to a pipeline class, rather than the model which is outputted. Such a pipeline may output a constant-0 model for some set of hyperparameters, but not others.\", \"how is clustering done for calibration? I don\\u2019t think this is discussed\", \"Table 4: not sure how Disp. Impact is defined\"], \"questions\": [\"I\\u2019m not sure the framework in Table 1 does a ton for me - doesn\\u2019t really connect to the rest of the paper at the moment so tightly and I\\u2019m not sure what the takeaways are. I would be interested to see a clearer explication of how these types of uncertainty connect to epistemic/aleatoric\", \"would like to see a more fleshed out argument for the consistency axiom\", \"would be interested to know more about connections to current calibration literature\", \"how much do results change if each fairness metric is optimized for individually (in combination with error)\", \"is there a more general version of the UA-SP definition?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We hope that you had a restful (and delicious) holiday period, thank you for getting back to us!\\n\\nThis makes sense, and we appreciate your renewed openness. 
We believe that the following is within the purview of the review process: the proposal above, which is to introduce the new consistency across subgroups definition after the original definition, clearly define the contrast, and offer the consistency experiments as suggested. \\n\\nAs we cannot revise the paper PDF any longer, **we have added the new section (slightly abbreviated below), and note that most of this new section follows the existing introduction of *consistency* / fairness philosophy in the current paper body, and some of which will be deferred to experimental results section. We have also added experiments on the Binomial NLL method with XGBoost on all the datasets to the main paper body from the below figures, and deferred the remaining experiments (on the other uncertainty methods / neural model) to the Appendix.**\\n\\n### Refining Consistency: Consistency Across Subgroups \\nThe concept of *consistency* (as defined in Axiom 2.4) posits that similar learning pipelines should produce similar uncertainty estimates. This axiom emphasizes that uncertainty estimates should be a function of the data rather than arbitrary artifacts of the learning pipeline. However, one might argue that this does not necessarily connote unfairness; a minor adjustment in a hyperparameter could uniformly reduce all uncertainty estimates by a certain percentage. Such a uniform change *might* be deemed acceptable; fairness concerns would then only arise if the change disproportionately affects uncertainty estimates for protected groups. Though this scenario may be unlikely, it is important to address explicitly when defining *consistency* in a fairness context. 
Thus, we propose the following as an alternative *consistency* definition: **Consistency Across Subgroups**, which formalizes the idea that minor hyperparameter adjustments should not introduce significant disparities in uncertainty estimates across different subpopulations.\"}", "{\"comment\": \"Thank you for your thorough review and the suggestions to enhance our paper! We will respond to each of your comments/questions below, briefly summarizing the comment inline.\\n\\n> **disagree with characterization of A/B types as unmeasurable and C/D/E types as estimatable from a fixed sample\\u2026explicating the underlying data generating process would make this whole contribution more substantial\\u2026interested to see a clearer explication of how these types of uncertainty connect to epistemic/aleatoric**\\n\\nW1/Q1. Good point! We agree this characterization of uncertainty is not tied to the paper, and does not contribute much. As such, we\\u2019ve moved it to the appendix. Our original intention was to provide a more nuanced taxonomy of aleatoric and epistemic uncertainty.\\n\\n> **Def 2.2 - \\u201cdiffer only in hyperparameter settings\\u201d and \\u201cthere is some j\\u2026\\u201d seem to say different things: the first implies P and P\\u2019 are in some shared \\u201cpipeline class\\u201d parametrized by different hyperparameters, the second implies that just one hyperparameter differs**\\n\\nW2. Good point! In defining \\u201csimilar learning pipelines\\u201d, we are walking the line between generality and formalization. Our intention is to define $P$ and $P'$ in some shared \\u201cpipeline class\\u201d, but then we need to define what this pipeline class means. Since hyperparameter values are on different scales (e.g., learning rate and number of layers), it becomes difficult to talk about $P$ and $P'$ that differ in multiple hyperparameters but are still \\u201cclose\\u201d. 
Hence, we formalize the pipeline class as $P$ and $P'$ that differ only in a single variable, at the cost of generality. This is a tradeoff we are willing to accept because, from a practical perspective, we use consistency as a one-sided test. As such, we would rather have a formal test that, if an algorithm fails it, tells us that the algorithm is not consistent, rather than a more general test that is difficult to implement. That said, we welcome discussion of another formal definition of pipeline class.\\n\\n> **On Line 127, f(P) is used - this seems wrong to me, as on L117, f is named as the output of P, not a function of it**\\n\\nW2. Yes, the notation $f(P)$ to describe a trained model created from a learning pipeline $P$ is strange; we have updated it to $f_P$ to indicate that the model $f$ was produced by learning pipeline $P$.\\n\\n> **I\\u2019m not sure I agree with the given definition of consistency as a useful fairness axiom - it seems like more of a Lipschitz condition over what a well-parametrized pipeline might look like. My impression of a fairness axiom of this type might be of the flavor that uncertainty estimates should be as invariant as possible to hyperparameter choices\\u2026more fleshed out argument for the consistency axiom.**\\n\\nW3/Q2. The intention of the consistency axiom is to be invariant to hyperparameter choices, as you suggest. The challenge is that clearly the predictions have to have some dependence on hyperparameters (otherwise the hyperparameters don\\u2019t matter). Hence we chose to formalize invariance as requiring that small changes in hyperparameters do not substantially change the predictions. 
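For intuition, the one-sided test just described can be sketched in a few lines of Python. This is an illustrative sketch only — the function name and all numbers below are ours, not FairlyUncertain's API: pipelines differing in a single hyperparameter each produce per-individual uncertainty estimates, and their disagreement is summarized by the maximum per-individual standard deviation.

```python
# Illustrative sketch (not the benchmark's implementation) of the one-sided
# consistency check: take uncertainty estimates from pipelines that differ in
# a single hyperparameter, compute each individual's standard deviation across
# pipelines, and report the maximum. The pipeline outputs are hard-coded
# stand-ins for trained models.
from statistics import pstdev

def consistency_score(estimates_per_pipeline):
    """estimates_per_pipeline: one inner list of per-individual uncertainty
    estimates for each hyperparameter setting."""
    n = len(estimates_per_pipeline[0])
    per_individual_std = [
        pstdev([est[i] for est in estimates_per_pipeline]) for i in range(n)
    ]
    return max(per_individual_std)

# Three pipelines differing only in one hyperparameter (e.g., tree depth);
# each row holds uncertainty estimates for the same four individuals.
stable = [[0.10, 0.20, 0.30, 0.40],
          [0.11, 0.19, 0.31, 0.40],
          [0.10, 0.21, 0.29, 0.41]]
erratic = [[0.10, 0.20, 0.30, 0.40],
           [0.50, 0.05, 0.70, 0.10],
           [0.02, 0.60, 0.01, 0.80]]

assert consistency_score(stable) < consistency_score(erratic)
```

A large score flags a method as inconsistent; a small score does not certify consistency, which is what makes the test one-sided.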
We chose this formalization because it is the closest to the invariance goal that we have come up with, and we would very much welcome another formalization that more closely aligns with invariance.\\n\\n> **there is a rich literature on calibration metrics which is not connected to at all in this paper - it would be good to get a better understanding of how those relate to the chosen metrics / would be interested to know more about connections to current calibration literature**\\n\\nW4/Q3. The literature that we are familiar with on calibration metrics broadly falls into one of two categories:\\n\\n1. Expected Calibration Error (ECE): This approach buckets observations by the uncertainty predictions and computes the difference between the average predicted uncertainty and the average observed uncertainty. Instead of reporting this as a metric (e.g., a weighted average), we plot the predicted versus observed uncertainty in our \\u201cqualitative\\u201d calibration experiments.\\n\\n2. Negative Log Likelihood (NLL): This approach makes an assumption on the underlying distribution and measures how likely we are to observe the outcomes that we did if the predicted parameters of the assumed distribution were correct. We report this in our \\u201cquantitative\\u201d calibration experiments.\\n\\n**We have added discussion on ECE vs. NLL in Appendix Section D.1 in the revised PDF, and have reported calibration metrics for ECE in Table 7, also in the Appendix (these results confirm our results on the NLL calibration metric).** Additionally, in the final version of the paper, we would be happy to discuss more literature on calibration metrics, especially in relation to our work. Please let us know if there are additional calibration metrics that you think we should report (beyond the ECE plots and the NLL values).\"}", "{\"comment\": \"Thank you for investing your time in reviewing our submission and for your constructive comments! 
We will respond to each of your comments/questions below, briefly summarizing the comment inline.\\n\\n\\n> **Findings on reducing errors but not outcome imbalances could be expanded**\", \"w1\": \"We have expanded our results to include experiments on per-group error rates in the abstention model. In Appendix E of the revised paper, Figures 18 and 19 plot the error rate for each protected group versus abstention rate. In Figure 18, we specify an overall abstention rate $r$ (on the $x$-axis) and plot the error rate on each protected group. We find that all the algorithms produce different levels of error between protected groups on the German and ACS Public Coverage datasets. In Figure 19, we plot the error rate (on the $x$-axis) and the abstention rate for each protected group. As we expect, the random baseline has no difference in abstention rate between protected groups (the uncertainty estimates are random). In contrast, Selective Ensemble has a substantial difference in abstention rates. This experiment has added depth to the benchmark, and we will continue adding similar experiments as FairlyUncertain grows! Please let us know if you have suggestions for another experiment.\\n\\nIn addition, your point about reducing errors but not outcome imbalances has been echoed by other reviewers. When abstention is employed as a fairness strategy by utilizing uncertainty estimates, our results reveal that it does not necessarily reduce demographic disparities in binary classification tasks. In fact, abstention can sometimes create or amplify biases. To illustrate this with a straightforward example, consider a binary classification model evaluated on two demographic groups, $A$ and $B$, where each group contains $N_A = N_B$ examples. Both groups have an equal distribution of positive ($Y = 1$) and negative ($Y = 0$) examples ($N_{A,1} = N_{A,0} = N_{B,1} = N_{B,0}$). 
Without abstention, the model performs perfectly, achieving true positive rates (TPRs) and false positive rates (FPRs) of $\\\\text{TPR}_A = \\\\text{TPR}_B = 1.0$ and $\\\\text{FPR}_A = \\\\text{FPR}_B = 0.0$, thereby satisfying equalized odds. Now suppose the model incorporates abstention based on uncertainty. For group $A$, the model maintains very low uncertainty and continues to predict perfectly ($\\\\text{TPR}_A = 1.0, \\\\text{FPR}_A = 0.0$). In contrast, the model exhibits high uncertainty for group $B$, leading it to abstain on all positive examples ($Y = 1$), resulting in $\\\\text{TPR}_B = 0.0$, while still correctly classifying negatives ($\\\\text{FPR}_B = 0.0$). This abstention-driven gap in TPRs ($\\\\text{TPR}_A - \\\\text{TPR}_B = 1.0$) introduces a significant violation of the equalized odds fairness metric. While this is an extreme case, we observe similar subgroup disparities in abstention rates in our empirical findings, detailed in Table 3. This example is further explained in Appendix E of the revised PDF (refer to **Example E.1**).\\n\\n> **fully explore variations in models, parameters, or datasets**\", \"w2\": \"Our focus was on constructing a novel set of evaluations according to the principles of consistency and calibration, and on running across a variety of datasets and for a variety of metrics. We agree that including more models, parameters, etc. would strengthen the robustness of the results. To that end, we have included results for a neural network (linear layers with non-linear activations, see **Figures 9 and 13** in the revised PDF)\\n\\nOur benchmark is also highly extensible and we will continue to update with new model architectures, hyperparameter variations, and relevant datasets.\\n\\n> **plans to include additional fairness interventions**\", \"q1\": \"Yes, we have plans to include an extensive array of fairness intervention techniques. 
To demonstrate the robustness of our results in **Table 3**, we have added results on the state-of-the-art FairGBM in-processing fairness method (Cruz et al., 2022; 2024) for the revised rebuttal PDF. This is a variant of the strong LightGBM predictor (Ke et al., 2017), but with fairness constraints on the objective. We have two variants, FairGBM SP and FairGBM EO, each tuned for those respective fairness metrics. Note that this method performs well on fairness metrics, and has a lower error rate than some of the other fairness interventions, but still cannot match the error rate performance of the methods allowed to abstain (for the same reasons we discuss in Section 5).\"}", "{\"comment\": \"> **generalizability of results**\", \"q4\": \"This is a good question! We specifically chose a reasonably large, representative sample of commonly available fairness benchmark datasets (including general support for the ACS folktables package (Ding et al., 2021, https://github.com/socialfoundations/folktables) which itself contains many different data scenarios). We believe we have already shown pretty extensive results in the tabular data setting, and are excited to continue expanding and updating the available datasets to demonstrate the robustness of our findings.\"}", "{\"title\": \"Re: Official Comment by Authors\", \"comment\": \"Thank you for your replies. I feel positive about your revision and I am willing to raise my score by 1.\"}", "{\"comment\": \"Thank you for taking the time to review our submission and for offering suggestions for improvement! 
We will respond to each of your comments/questions below, briefly summarizing the comment inline.\\n\\n\\n> **clarify connection between \\\"Fairness\\\" and \\\"Consistency/Calibration\\\"**\", \"w1\": \"Thank you for this insight; we note that similar concerns were raised by other reviewers, and we have addressed them by adding a new section titled **\\\"Connecting Consistency and Calibration to Fairness Principles\\\"** in our revised PDF.\\n\\nTo clarify briefly, the connection between fairness and our axioms of consistency and calibration lies in how uncertainty estimates impact equitable decision-making. While our consistency metrics (e.g., standard deviation or p-values) are computed over different ensembles without explicit reference to sensitive attributes, they relate to fairness by mitigating arbitrary variability in predictions that could disproportionately affect sensitive groups. Ensuring consistent uncertainty estimates across models helps prevent unfair treatment arising from randomness in the learning process.\\n\\nRegarding calibration, we acknowledge that Axiom 2.4 involves groups defined by sensitive attributes. In our empirical evaluation, we grouped individuals with similar uncertainty estimates to assess calibration. However, we have clarified in the revised PDF that groups based on sensitive attributes are also essential for evaluating calibration in the context of fairness. Proper calibration ensures that uncertainty estimates are not systematically biased against any sensitive group, thereby promoting fair outcomes. Overall, we argue that consistency and calibration are crucial for fair uncertainty estimation because they help prevent arbitrary and biased predictions that could harm individuals (and furthermore exacerbate group disparities based on sensitive attributes). 
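As a concrete illustration of the group-based calibration check described above, the sketch below (with made-up data and our own naming, not the benchmark's code) compares each group's average predicted standard deviation against the empirical spread of its residuals; a large gap for one sensitive group signals systematically biased uncertainty estimates.

```python
# Illustrative sketch (names and data are ours): probe whether uncertainty
# estimates are systematically mis-calibrated for a sensitive group by
# comparing the average predicted standard deviation against the observed
# residual spread within each group.
from statistics import mean, pstdev

def group_calibration_gaps(predicted_sigma, residuals, groups):
    gaps = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        observed = pstdev([residuals[i] for i in idx])
        predicted = mean([predicted_sigma[i] for i in idx])
        gaps[g] = abs(observed - predicted)
    return gaps

# Toy data: group "b" receives predicted sigmas far below its actual noise.
sigma = [1.0, 1.0, 1.0, 0.2, 0.2, 0.2]
resid = [0.9, -1.1, 1.0, 0.8, -0.9, 1.0]
group = ["a", "a", "a", "b", "b", "b"]

gaps = group_calibration_gaps(sigma, resid, group)
assert gaps["b"] > gaps["a"]  # group "b" is the badly calibrated one
```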
\\n\\n> **the methodology is limited to ensemble methods\\u2026how can this be generalized to a wider class of ML methods (e.g., neural networks)?**\", \"w2\": \"We\\u2019d like to emphasize that our framework is not limited to ensemble methods. In fact, any model that can produce both a probability of a label (in the binary case) or a real value, is admissible. Still you bring up a valid point, also by other reviewers; to demonstrate that our framework is highly extensible to other methods, we have rerun experiments on neural networks with linear layers and non-linear ReLU activation functions (varying the $\\\\alpha$ weight decay regularizer parameter). **You can see the results for this additional model class in **Figures 9 and 13** in the revised PDF.**\"}", "{\"comment\": \"> **for consistency, it seems like a non-robust choice to output a maximum std. Dev over individuals - I\\u2019m wondering if like a 90th percentile might be a better choice**\\n\\nW5. Good point! We will run the final version of our experiments with the 90th percentile of the standard deviation. For now, we note that the max std aligns with the fuller summary of standard deviations. For example, in Figure 2, Ensemble is the most consistent, closely followed by Binomial NLL, while Self-(in)consistency and Selective Ensemble are the least consistent.\\n\\n> **odd that objective function is normalized sum of multiple fairness metrics**\\n\\nW6/Q4. This is a helpful remark\\u2014thank you! **We have updated Table 3 in the revised paper PDF. Now, we have one set of algorithms whose abstention rate is optimized for statistical parity and another set of algorithms whose abstention rate is optimized for equalized odds. 
With these updates, we find the same results: While the abstention framework allows models to reduce their error rate, it does not magically reduce the imbalance in outcomes between demographic groups.\\n\\n> **Def 6.2 - it seems overly specific to me to assume that the model outputs are parametrizing a normal distribution. In particular, I don\\u2019t think this makes sense in the binary Y case\\u2026**\\n\\nW7/Q5. You are correct! UA-SP does not make sense as a fairness metric in the binary setting; it is specific to the regression setting. In the regression setting, we make the assumption that the heteroscedastic uncertainty is normally distributed. We could also state UA-SP for a more general distribution class with more parameters but, for simplicity, we chose to state the UA-SP definition in the context of the normal distribution. Note that this assumption may be more or less applicable depending on the uncertainty. \\n\\nWe could have also stated UA-SP in a more general sense, given here:\\n\\n$\\\\textbf{Uncertainty-Aware Statistical Parity} (\\\\textit{UA-SP})~~~$ Consider a function $f: \\\\mathcal{X} \\\\times \\\\mathcal{A} \\\\to \\\\mathbb{R}^2$ that estimates a mean $\\\\mu$ and a standard deviation $\\\\sigma$. The predictions induce a randomized function $\\\\tilde{f}: \\\\mathcal{X} \\\\times \\\\mathcal{A} \\\\to \\\\mathcal{Y}$ that samples $y$ from a probability distribution with mean $\\\\mu$ and variance $\\\\sigma^2$ and PDF $P(y; \\\\mu, \\\\sigma^2)$. 
Then $f$ satisfies uncertainty-aware statistical parity if, for all protected groups $a \\\\in \\\\mathcal{A}$ and outcomes $y \\\\in \\\\mathcal{Y}$,\\n$$\\nPr(\\\\tilde{f}(\\\\mathbf{X}, A) \\\\geq y \\\\mid A = a) = Pr(\\\\tilde{f}(\\\\mathbf{X}, A) \\\\geq y).\\n$$\\n\\nIf you would like, we would be happy to update our definition of UA-SP to be the more general form in the revised PDF, and then specify that for practical purposes when evaluating, we set $P(y; \\\\mu, \\\\sigma^2) = \\\\mathcal{N}(y; \\\\mu, \\\\sigma^2)$.\\n\\n*Small point weaknesses:* We appreciate these notes! We have updated the text so that it says \\u201ca pipeline that always leads to the same predictions is consistent but not meaningful.\\u201d When you say \\u201cclustering,\\u201d we assume you speak of experiments like the one presented in Figure 3; we group individuals based on the percentile buckets they fall into over their uncertainty estimates, and will clarify this in the final version. Disparate impact in Table 3 (of the revised PDF) is defined according to the ratio given in Definition I.4 (the last page of the Appendix). We will clarify that the fairness definitions/metrics can be found there in the final version of our paper.\"}", "{\"comment\": \"We thank all reviewers for their time and effort in reviewing our paper. Below, we provide a global overview of the updates we made during the rebuttal process:\\n\\n1. **Clarified the connection between fairness and our axioms of consistency and calibration**. We added a new section titled \\\"Connecting Consistency and Calibration to Fairness Principles\\\" to strengthen the philosophical foundation of our approach.\\n2. **Expanded our experimental results.** We added new experiments on per-group error rates and the impact of abstention on fairness metrics. We also extended our existing experiments with additional architectures, calibration metrics, and fairness definitions.\\n3. 
**Refined our definitions and formalizations.** Based on reviewer feedback, we enhanced the clarity and completeness of our presentation, particularly in the definitions of consistency and the formalization of our fairness/uncertainty axioms.\\n\\nOnce again, thank you very much to the area chair and all reviewers!\"}", "{\"summary\": \"The paper introduces FairlyUncertain, a Python package designed to evaluate uncertainty in fairness for machine learning algorithms. The package proposes methods and standards to incorporate uncertainty estimation into fairness evaluations in predictive models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper identifies and addresses the underexplored intersection of fairness and uncertainty in predictive modeling.\\n2. The study spans multiple datasets and predictive tasks, enhancing the benchmark's relevance and applicability.\\n3. Designing the benchmark as an open-source tool encourages ongoing development and community contributions.\", \"weaknesses\": \"1. The paper targets uncertainty as a critical factor in predictive fairness, given the real-world impact of machine learning decisions. However, it does not adequately justify the importance of the consistency and calibration axioms themselves within this context. Specifically, while uncertainty is broadly recognized as significant for transparent AI, the paper could strengthen its justification by explaining why consistency and calibration specifically matter for fair uncertainty estimation.\\n\\n2. Although the paper targets \\\"fair uncertainty,\\\" the fairness component feels less integrated with uncertainty until Section 5, where it discusses the abstention framework and Uncertainty-Aware Statistical Parity. The paper could benefit from introducing the importance of fairness alongside uncertainty earlier, so readers understand the relationship between these two aims from the outset. \\n\\n3. 
This paper is largely an implementation paper, presenting a framework and accompanying package for uncertainty-fairness evaluation rather than introducing fundamentally new theoretical insights into either fairness or uncertainty estimation. Its main contribution lies in providing a package that can support standardized evaluations of uncertainty estimation methods within fairness contexts. Therefore, it may be better suited for a benchmark track or tools/package track than a general research track.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"> **Consistency as fairness axiom: a change in hyperparameter which yields a change in output is not necessarily unfair \\u2026**\\n\\nUnderstood! We propose the following path forward, which should serve to both address the point you\\u2019re making and broaden the paper\\u2019s discussion on fairness (to acknowledge your example of how swings in the magnitude of uncertainty are not inherently unfair under one interpretation of what constitutes fairness, but are not accounted for under the stated definition of consistency). Perhaps we can both agree that fairness concerns do arise when such large swings based on small hyperparameter changes disproportionately affect uncertainty estimates for protected groups in the data. Thus, one goal would be to ensure that minor hyperparameter adjustments do not introduce significant disparities in uncertainty estimates across different subpopulations i.e. consistency **across subgroups.** \\n\\nWe could formalize this as follows,\\n\\n**Consistency Across Subgroups.** Let $\\\\mathcal{P}$ and $\\\\mathcal{P}'$ be two similar learning pipelines differing only in hyperparameters $\\\\lambda$ and $\\\\lambda'$, with $\\\\delta(\\\\lambda, \\\\lambda')$ measuring the change in hyperparameters. 
For all protected groups $a, a' \\\\in \\\\mathcal{A}$, the change in uncertainty estimates should not disproportionately affect one group over another. Specifically, we require: $$ | (\\\\sigma_a - \\\\sigma_a' ) - ( \\\\sigma_{a'} - \\\\sigma_{a'}' ) | \\\\leq L \\\\cdot \\\\delta(\\\\lambda, \\\\lambda'), $$\\nwhere $\\\\sigma_a = \\\\mathbb{E} [ \\\\sigma(x) ]$ is the average uncertainty estimate for group $a$ under pipeline $\\\\mathcal{P}$, $\\\\sigma_a' = \\\\mathbb{E} [ \\\\sigma'(x) ]$ is the average uncertainty estimate for group $a$ under pipeline $\\\\mathcal{P}'$, $L$ is a Lipschitz constant, and $\\\\delta(\\\\lambda, \\\\lambda')$ quantifies the change in hyperparameters (note the expectations are taken over $\\\\{x \\\\mid A = a\\\\}$).\\n\\nWe could include both definitions for completeness, and note the philosophical distinction in what constitutes *fair consistency,* and how this leads to a more general definition or a more specific one. Does this seem like a reasonable way to strengthen the paper and adjust our definition of consistency to account for the points you brought up?\"}", "{\"comment\": \"Thank you so much for the time you spent reading our revised paper and rebuttal and for improving your score! We'd be happy to respond to additional questions or comments as well.\"}", "{\"summary\": \"The authors introduce FairlyUncertain, an axiomatic benchmark for evaluating uncertainty estimates in fairness. The benchmark posits that fair predictive uncertainty estimates should be consistent across learning pipelines and calibrated to observed randomness. FairlyUncertain suggests that: In the binary setting, natural uncertainty estimates beat complex ensemble-based approaches and abstaining improves error but not imbalance between demographic groups. 
In the regression setting, consistent and calibrated uncertainty methods can reduce distributional imbalance without any explicit fairness intervention.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The organization and writing are fairly clear, and the structure of the paper is sound.\\n\\n2. The motivation is clear, and the background as well as the related works are well explained.\\n\\n3. The paper shows enough originality and novelty, by providing a novel framework for fairness and uncertainty.\\n \\n4. The experiments are extensive, comparing the performances on various datasets and different prediction tasks.\", \"weaknesses\": \"1. The connection between \\\"Fairness\\\" and \\\"Consistency/Calibration\\\" is not so clear: in the description of consistency/calibration metrics, the sensitive attributes don't seem to be included and discussed. For example, for consistency, all the metrics (SD or p-values) are computed over different ensembles, but how is it related to fairness? For calibration, it says \\\"We identify groups of individuals with similar uncertainty estimates and empirically evaluate the standard deviation of the residual difference between the observed and predicted outcomes\\\", but in the previous definition of calibration (Axiom 2.4), the group should be defined by sensitive attributes. The relationships between fairness and consistency/calibration should be further explained.\\n\\n2. The methodology is limited to ensemble methods: the consistency/calibration metrics discussed in the paper all focus on ensemble methods (i.e., XGBoost), how can this be generalized to a wider class of ML methods (e.g., neural networks with stochastic optimizers)?\\n\\n3. 
The experiments on fairness interventions are not very convincing: in section 5, the authors show that \\\"abstaining from binary predictions, even with improved uncertainty estimates, reduces error but does not alleviate outcome imbalances between demographic groups\\\", can the authors provide more insights into this phenomenon? This point is linked to my previous request for more explanation on the connection between fairness and uncertainty estimates.\", \"questions\": \"1. Is there more discussion on the connection between fairness and consistency/calibration? For example, how does the standard deviation across different hyper-parameters (which don't take sensitive attributes into account) help mitigate bias towards certain groups (which need information on sensitive attributes)?\\n\\n2. Can this definition of consistency/calibration be generalized to other ML methods other than ensembles?\\n\\nI would like to raise my score if these weaknesses/questions can be addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Consistency Across Subgroups** (*Axiom 2.4*) Let $\\\\mathcal{P}$ and $\\\\mathcal{P}'$ be two similar learning pipelines differing only in hyperparameters $\\\\lambda$ and $\\\\lambda'$, with $\\\\delta(\\\\lambda, \\\\lambda')$ measuring the change in hyperparameters. For all protected groups $a, a' \\\\in \\\\mathcal{A}$, the change in uncertainty estimates should not disproportionately affect one group over another. 
Specifically, we require: \\\\begin{equation} \\\\left| \\\\left( \\\\sigma_a - \\\\sigma_a' \\\\right) - \\\\left( \\\\sigma_{a'} - \\\\sigma_{a'}' \\\\right) \\\\right| \\\\leq L \\\\cdot \\\\delta(\\\\lambda, \\\\lambda'), \\\\end{equation} where $\\\\sigma_a = \\\\mathbb{E}_{x \\\\sim \\\\mathcal{D}a} [ \\\\sigma(x) ]$ is the average uncertainty estimate for group $a$ under pipeline $\\\\mathcal{P}$, $\\\\sigma_a' = \\\\mathbb{E}{x \\\\sim \\\\mathcal{D}_a} [ \\\\sigma'(x) ]$ is the average uncertainty estimate for group $a$ under pipeline $\\\\mathcal{P}'$, $L$ is a Lipschitz constant, and $\\\\delta(\\\\lambda, \\\\lambda')$ quantifies the change in hyperparameters.\\n\\n*Axiom 2.3* focuses on the stability of uncertainty estimates across similar learning pipelines, treating all individuals equally, without considering group membership. However, in fairness-sensitive contexts, we might want to explicitly monitor and control changes in the learning pipeline that affect different subgroups, especially protected groups defined by attributes such as race, gender, or age. This is the utility of the **Consistency Across Subgroups** definition (Axiom 2.4), which extends the notion of consistency by explicitly accounting for potential disparities between groups. It emphasizes that uncertainty estimates should not only be stable overall but also that any changes should impact all groups similarly.\\n\\nTo assess the practical implications of **Consistency Across Subgroups** (*Axiom 2.4*), we conducted experiments analyzing how hyperparameter variations affect uncertainty estimates across different protected groups. We focused on models varying in key hyperparameters, such as the maximum depth in XGBoost and the weight decay regularization parameter $\\\\alpha$ in neural networks.\\n\\n**Figures 20 and 21** illustrate the per-group average uncertainty estimates as hyperparameters are varied. 
Specifically, we plot the average uncertainty $\\sigma_a$ for each protected group $a$ against different settings of the hyperparameters. The results demonstrate whether changes in hyperparameters lead to disproportionate shifts in uncertainty estimates between groups. The experimental results reveal that, in some cases, minor hyperparameter adjustments can lead to significant differences in uncertainty estimates between groups, violating the **Consistency Across Subgroups** (*Axiom 2.4*) criterion. For example, in **Figure 20**, we observe that increasing the maximum depth in XGBoost models disproportionately increases the uncertainty estimates for one protected group compared to the other. Similarly, although to a lesser degree, **Figure 21** shows across all the datasets that adjusting the weight decay in neural networks can have unequal effects on different groups' uncertainty estimates. These experiments underline the utility of adopting **Consistency Across Subgroups** (*Axiom 2.4*) in fairness contexts, as the definition acknowledges that fairness in uncertainty estimation is both about individual stability (*Axiom 2.3*) and about equitable treatment across different subpopulations, depending on the predictive setting and training pipeline being audited.\"}", "{\"title\": \"Thank you\", \"comment\": \"I'd like to thank the authors for their responses to all reviewers; I read their answers and would be happy to raise my score by 1.\"}", "{\"summary\": \"The paper introduces FairlyUncertain, a benchmark designed to assess the integration of uncertainty into fairness contexts within machine learning. Addressing the inherent prediction uncertainty challenge for fairness, the paper emphasizes that uncertainty estimates should ideally be consistent across similar models and calibrated to observed randomness.
Through experiments across 10 datasets, including binary classification and regression tasks, the authors evaluate various methods for uncertainty estimation and introduce a new fairness metric, Uncertainty-Aware Statistical Parity (UA-SP), tailored for regression tasks. FairlyUncertain provides a structured benchmark to explore the nuanced relationship between uncertainty and fairness, specifically examining the effects of abstention and confidence thresholds.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1- FairlyUncertain introduces a standardized benchmark that is both theoretically grounded and practical, which allows researchers to evaluate how uncertainty affects fairness and vice versa. This is timely, given the increasing need for fair AI applications under uncertainty.\", \"2__the_paper_defines_fairness_focused_axioms\": \"Consistency (Axiom 2.3) and Calibration (Axiom 2.4), which set criteria for reliable uncertainty estimates. These axioms are clear and can be used for the practical goal of having fair predictive models that remain robust even under model variations.\\n\\n3- The authors provide extensive evaluations of consistency and calibration across datasets and methods. They cover both abstention and confidence thresholding. The benchmark\\u2019s focus on these uncertainty handling methods, without endorsing one as definitive, offers a structured way to assess their impact on fairness. \\n\\n4- The benchmark is open-source which could be used in future research.\\n\\n5- The benchmark\\u2019s use of consistency as a measure for similar pipelines indirectly addresses concerns about model multiplicity. FairlyUncertain's evaluation across slight hyperparameter changes serves as a practical check for this multiplicity effect.\", \"weaknesses\": \"1- Axiom 2.3 suggests that small changes in hyperparameters should not significantly impact uncertainty estimates. 
While this is a reasonable assumption, it would benefit from explicit clarification on the stability limits of uncertainty under various hyperparameter changes to avoid overgeneralization.\\n\\n2- Although the benchmark evaluates abstention as a fairness strategy, abstention does not reduce demographic disparities in binary classification tasks. Addressing why abstention fails to impact fairness in this context could inform potential adjustments to the benchmark. Also, given that abstention was shown not to reduce demographic disparities, the paper could address situations where abstention might inadvertently introduce biases. For example, explaining conditions under which abstention could be counterproductive would deepen the understanding of its impact on fairness.\\n\\n3- While tables like Table 2 and Table 4 capture calibration and consistency scores, the paper could strengthen its analysis by discussing why certain methods perform better/worse across different datasets. Additionally, the appendix includes valuable demographic-specific breakdowns that could be better integrated into the main analysis to improve understanding of method reliability and demographic-specific impacts.\\n\\n4- While XGBoost is effective for tabular data, expanding to deep learning models like MLPs, or transformers would align with recent trends in fairness research and test FairlyUncertain\\u2019s utility on more complex architectures. Reproducibility might vary across model architectures, especially complex ones, due to the implicit assumption of consistency under slight hyperparameter variations. While the benchmark is robust as presented, extending it to a broader model set might require tuning and validation.\", \"questions\": \"1- How does this work compare with existing research on fairness evaluation (beyond Section 6.1)? 
i.e., to what extent does the authors' focus on uncertainty help the practical selection of fairness measures and mitigation strategies?\\n\\n2- Could this framework be adapted for more complex models or architectures and extended to additional fairness notions, such as intrinsic fairness? If so, how?\\n\\n3- What are the societal implications of incorporating uncertainty in the selection of fair models? Could the authors expand on the strengths of their approach relative to existing methods, providing examples to illustrate the impact beyond simple numerical metrics? Existing literature on model multiplicity highlights the uncertainty involved and explores its societal, legal, and philosophical implications (there are many papers out there on this topic). I am wondering, what new insights does this paper contribute to these ongoing discussions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Revised PDF\", \"comment\": \"We thank all of the reviewers for their time and insightful feedback! We\\u2019d like to highlight that we have uploaded a revised PDF, which includes any additional experimental results run as part of our rebuttal, alongside some lightly modified text. All additional changes as part of the rebuttal are highlighted as **blue text** (for new figures/tables for the rebuttal, the captions are in blue). **Then, in our rebuttal text below, we reference figures and tables according to the numbering in this revised, rebuttal PDF.**\"}", "{\"comment\": \"We are mindful of how hectic the end of ICLR rebuttal period can be! Still, if possible, we\\u2019d be grateful for your thoughts on whether our response has adequately addressed your feedback (and, of course, if you have any further comments/questions).\"}", "{\"summary\": \"The paper introduces \\\"FairlyUncertain\\\", a benchmark to evaluate predictive uncertainty in fairness contexts. 
It emphasizes the need for consistency and calibration of uncertainty estimates across learning pipelines. Experiments on ten datasets reveal that a simple uncertainty estimation method outperforms prior work, that abstaining from uncertain binary predictions reduces errors but not demographic imbalances, and that incorporating calibrated uncertainty in regression improves fairness. The benchmark is extensible and aims to standardize fairness assessments involving uncertainty, promoting equitable and trustworthy machine learning practices.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Originality: Introduces FairlyUncertain, a novel benchmark for integrating uncertainty into fairness.\", \"Quality: Validated through extensive experiments across ten datasets; open-source and extensible.\", \"Clarity: Clear motivation, methodology, and presentation.\", \"Significance: Provides a foundational, scalable framework for enhancing fairness in machine learning.\"], \"weaknesses\": [\"Findings on reducing errors but not outcome imbalances could be expanded with deeper insights or additional benchmarks.\", \"The paper does not fully explore variations in models, parameters, or datasets, which limits its generalizability and insights into broader use cases.\"], \"questions\": [\"Are there plans to include additional fairness interventions?\", \"Which new calibration metrics could enhance FairlyUncertain?\", \"Why does regression improve fairness without explicit interventions?\", \"How generalizable are the results beyond the ten datasets used?\"], \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **can the authors provide more insights into how abstaining reduces error but does not alleviate outcome imbalances between demographic groups**\\n\\nW3. 
In response to this question, we have added the following illustrative example. Abstention is often employed as a fairness mechanism by utilizing uncertainty estimates, but our analysis shows that it does not inherently mitigate demographic disparities in binary classification tasks. In some scenarios, abstention can even cause or intensify biases. Consider a simple example involving a binary classification model evaluated on two demographic groups, $A$ and $B$, with $N_A = N_B$ examples. Both groups have the same number of positive ($Y = 1$) and negative ($Y = 0$) instances ($N_{A,1} = N_{A,0} = N_{B,1} = N_{B,0}$). Without abstention, the model performs flawlessly, achieving true positive rates (TPRs) and false positive rates (FPRs) of $\\\\text{TPR}_A = \\\\text{TPR}_B = 1.0$ and $\\\\text{FPR}_A = \\\\text{FPR}_B = 0.0$, satisfying the equalized odds criterion. Now suppose the model incorporates abstention based on uncertainty. For group $A$, the model exhibits extremely low uncertainty and continues to predict perfectly ($\\\\text{TPR}_A = 1.0, \\\\text{FPR}_A = 0.0$). However, for group $B$, the model demonstrates high uncertainty, abstaining on all positive instances ($Y = 1$), which leads to $\\\\text{TPR}_B = 0.0$, while still correctly classifying negatives ($\\\\text{FPR}_B = 0.0$). This disparity in TPRs ($\\\\text{TPR}_A - \\\\text{TPR}_B = 1.0$) introduces a severe violation of the equalized odds fairness metric. Although this is a hypothetical extreme, similar patterns of imbalance in abstention rates across subgroups appear in our experimental results, summarized in Table 4. A detailed discussion of this example is provided in Appendix E of the updated PDF (see Example~E.1).\\n\\n**In addition, we have added **Figures 18 and 19 to Appendix E** in the revised version of the paper.** These plots show how error rate decreases with a higher abstention rate when observations with high levels of uncertainty (and poor quality predictions) are abstained on. 
The key insight is that abstaining is independent of the protected groups and, as we see on several datasets like German and ACS Public Coverage and several methods like Selective Ensemble, abstaining can be done for one protected group at a higher rate than the other protected group, leading to **disparate error rates across protected groups**.\\n\\n> **more discussion on the connection between fairness and consistency/calibration\\u2026**\\n\\nGreat point! We have added a discussion to Section 2 of the paper, and added a version of it below for easy access.\\n\\nUncertainty estimation is widely recognized as a crucial aspect of transparent machine learning practices (Bhatt et al., 2021; Hendrickx et al., 2024). But how is it connected to the concept of fairness? Uncertainty arises from variability in training data and randomness in learning algorithms, leading to a distribution of possible models rather than a single deterministic one. Ignoring this distribution risks making arbitrary decisions, especially for individuals whose predictions might vary across modeling decisions or other sources of uncertainty. Such arbitrariness could disproportionately and unfairly affect minority groups in data Tahir et al. (2023). \\n\\nRecent work has demonstrated that state-of-the-art fairness interventions can exacerbate predictive arbitrariness; models with similar fairness and accuracy performance but different parameters can assign vastly different predictions to individuals, and this arbitrariness is intensified by fairness constraints (Long et al., 2024; Cooper et al., 2024). Our axiom of consistency for fair uncertainty estimation builds upon this insight by asserting that uncertainty estimates should not vary significantly across similar learning pipelines. Furthermore, our axiom of calibration aims to prevent systematic biases in uncertainty estimates that could disadvantage certain groups. 
For instance, if uncertainty is consistently underestimated for a particular group, the model may overstate its confidence in predictions for that group, leading to unfair treatment (Ali et al., 2021). This leads us to argue that adhering to the axioms of consistency and calibration are necessary tenets of a fair uncertainty estimation process\\n\\n> **can this definition of consistency/calibration be generalized to other ML methods?**\\n\\nQ2. Certainly! There is nothing specific to ensembles about the method. As such, we defined consistency and calibration generally so they could apply to other machine learning models. For our experiments, we used XGBoost because it was more performant. At your request, we have also rerun experiments with a neural network, as mentioned above; overall, we find that the neural network is slower than XGBoost and produces very similar results.\"}", "{\"comment\": \"Sorry for the delay, was away for Thanksgiving. With a reframe of the paper to include this idea substantially in terms of that definition both theoretically and experimentally, and providing clarity around the separation of the two definitions and how they provide differing criteria towards robustness and fairness respectively, I'd be happy to raise my score. I'm not sure this is a reasonable ask in the review process though.\"}", "{\"comment\": \"Oh shoot, it appears as though you are unable to add a new response today, even though the review period is still open. However, you should still be able to update your score if you choose to do so. Were you to raise your score, we would take that as a signal that you feel positively about our proposed changes in response to your suggestions. Thank you again for your time and review!\"}" ] }
C1E0Oo5qgK
Compress Guidance in Conditional Diffusion Sampling
[ "Anh-Dung Dinh", "Daochang Liu", "Chang Xu" ]
We found that enforcing guidance throughout the sampling process is often counterproductive due to the model-fitting issue, where samples are `tuned' to match the classifier’s parameters rather than generalizing the expected condition. This work identifies and quantifies the problem, demonstrating that reducing or excluding guidance at numerous timesteps can mitigate this issue. By distributing a small amount of guidance over a large number of sampling timesteps, we observe a significant improvement in image quality and diversity while also reducing the required guidance timesteps by nearly 40\%. This approach addresses a major challenge in applying guidance effectively to generative tasks. Consequently, our proposed method, termed Compress Guidance, allows for the exclusion of a substantial number of guidance timesteps while still surpassing baseline models in image quality. We validate our approach through benchmarks on label-conditional and text-to-image generative tasks across various datasets and models.
[ "Diffusion model", "guidance", "generative models", "compact diffusion" ]
https://openreview.net/pdf?id=C1E0Oo5qgK
https://openreview.net/forum?id=C1E0Oo5qgK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "XpPwrNlvEt", "PY4gy78yoN", "KJt07rjNDT", "KCMWVPGm7U", "JwXN5occn6", "7xTjHooypp" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730722079127, 1730631818677, 1730741445729, 1730264001014, 1730757597975, 1731657599928 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3471/Reviewer_kim7" ], [ "ICLR.cc/2025/Conference/Submission3471/Reviewer_upNj" ], [ "ICLR.cc/2025/Conference/Submission3471/Reviewer_dhbC" ], [ "ICLR.cc/2025/Conference/Submission3471/Reviewer_f7Bg" ], [ "ICLR.cc/2025/Conference/Submission3471/Reviewer_z6tK" ], [ "ICLR.cc/2025/Conference/Submission3471/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper reveals that enforcing guidance throughout the sampling process can be counterproductive. It identifies a model-fitting issue where samples are tuned to match classifier parameters instead of generalizing the expected condition. It shows that reducing or excluding guidance at numerous timesteps can mitigate this problem. By distributing a small amount of guidance over many sampling timesteps, the authors observe significant improvements in image quality and diversity. Their proposed method, Compress Guidance, reduces required guidance timesteps by nearly 40% while surpassing baseline models in image quality.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Pros:\", \"The paper addresses a significant issue: guidance in diffusion models.\", \"The experiments are well-structured and organized.\"], \"weaknesses\": [\"Cons:\", \"The motivation for compressing guidance is unclear. 
The paper fails to adequately demonstrate through experiments the weaknesses of uncompressed guidance, such as model-fitting issues and poor image quality.\", \"Distributing guidance across different timesteps presents a vast search space, which is a significant challenge.\", \"The method described in section 3.3 doesn't seem as simple as claimed in the paper's contributions.\", \"The table format is uncomfortable to read and appears inconsistent with the ICLR template and other papers.\"], \"minor_issue\": \"There's a missing citation on line 808.\", \"questions\": \"as above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper analyzes the model-fitting problem and finds that the relatively less guidance for diffusion is more important for model performance.\\n\\nThis paper proposes a simple yet quite effective method to address model-fitness.\\n\\nThis paper also provides detailed experimental results to validate their hypothesis.\\n\\nApart from the performance, the method shows faster convergence speed and save training cost.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This work provides clear theoretical and experimental explanations for model-fitting. The explanation is convincing.\\n\\n2. The method is simple but effective, it is not hard to implement technically.\\n\\n3. This paper includes sufficient experiments including U-Net and Transformer-based Diffusion models.\", \"weaknesses\": \"1. As the key contribution and observation of this paper, the experiments of model-fitting shown in Figure 2 are not well explained (such as the details of the model you use, and the dataset setting). Moreover, the main concern is whether the conclusion from Figure 2 is still valid on different model architectures and different datasets.\\n\\n2. 
The main assumption of this method is that the gradient of guidance should be concentrated in the early stages. Ignoring the later-stage guidance means less detailed information relative to the class will be present in the final images. But for images with more detailed information, this will lose more fine-grained elements. What's your comment on this?\\n\\n3. Based on the above discussion, I think a trade-off curve of the skipped steps (or GPU hours for the model to converge) and the final performance should be included in the paper.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors identify the model-fitting problem in conditional diffusion models and propose a solution for model-fitting using compress guidance.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"How they set up the problem of model fitting and on-sampling/off-sampling loss can be novel, but with the current presentation, it is hard to understand.\", \"weaknesses\": \"The paper seems to be written in haste. There are too many typos and grammatical errors, which hurt reading. A lot of table and figure references are incorrect, which adds another barrier of difficulty. A lot of quotation marks are wrong.\\n\\nThe main idea they propose here is model fitting, but reading their definition, I can't quite get what they exactly mean by model fitting. It is too loosely defined. Also, in off-sampling loss, I don't know what phi' is and how it's obtained. With this missing insight, I don't know how to interpret their Table 1, Figure 2, and evidence for model fitting.\\n\\nI also find they are lacking in literature review. The role of CFG and its improved versions have been studied extensively recently, e.g., [1-4].
Especially, [3] and [4] discuss CFG scheduling, which is very related to what they're doing in Compress Guidance. \\n\\n\\n[1] Chung, Hyungjin, et al. \\\"CFG++: Manifold-constrained Classifier Free Guidance for Diffusion Models.\\\" arXiv preprint arXiv:2406.08070 (2024).\\n[2] Sadat, Seyedmorteza, et al. \\\"No Training, No Problem: Rethinking Classifier-Free Guidance for Diffusion Models.\\\" arXiv preprint arXiv:2407.02687 (2024).\\n[3] Wang, Xi, et al. \\\"Analysis of Classifier-Free Guidance Weight Schedulers.\\\" arXiv preprint arXiv:2404.13040 (2024).\\n[4] Yoon, Youngseok, et al. \\\"Model Collapse in the Self-Consuming Chain of Diffusion Finetuning: A Novel Perspective from Quantitative Trait Modeling.\\\" arXiv preprint arXiv:2407.17493 (2024).\", \"questions\": \"In Evidence 3, all three rows latch onto the orange color as an important feature. It is counterintuitive that with more guidance, the image starts to become more different from the intended object.\\n\\nOn-sampling loss and Off-sampling loss are defined with classifier parameters. How are they relevant to classifier-free guidance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper analyzes the overfitting problem of classifier gradient in guided sampling of diffusion models, claiming that guidance at every denoise step might harm the conditional fidelity. 
In order to avoid too much gradient, compared to existing Early Stopping or Uniform Skipping techniques, the authors propose a novel trick, named as Compress Guidance, providing sufficient guidance resembling momentum method.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper is well structured, and the motivation is clear.\", \"The comparison by employing Early Stopping and Uniform Skipping is quite intuitive and easy to follow.\", \"The novelly proposed Compress Guidance encourages further works delving into guided sampling theory.\"], \"weaknesses\": \"- There are plenty of theoretical flaws in the paper, and some conclusions are not that obvious to draw but with no clarification.\\n 1. Proof of Thm. 1 is wrong. Forward process of diffusion model is technically a Markovian process, therefore, one cannot assume that two $\\\\mathbf{x}_t$ at different timesteps $t_1$ and $t_2$ are diffused with the same noise $\\\\epsilon$. Besides, noise prediction is not independent with $\\\\epsilon$, so one cannot assume that $|\\\\epsilon - \\\\epsilon(\\\\mathbf{x}_t,t)|$ has consistent approximate $\\\\Delta$ at different $t$.\\n 2. Conclusion in Thm. 1 cannot be generalized to any data distribution. In L144, $\\\\epsilon(\\\\mathbf{x}_t,t)\\\\sim\\\\epsilon$ is wrong in general case. Noticeably, $\\\\epsilon(\\\\mathbf{x}_t,t)=\\\\mathbb{E}[\\\\epsilon|\\\\mathbf{x}_t]$, and no further evidence on the distribution of noise prediction. Besides, one can easily calculate the score function (and hence the ground-truth noise prediction $\\\\epsilon(\\\\mathbf{x}_t,t)$) under Gaussian Mixture setting, e.g., $\\\\mathbf{x}_0\\\\sim\\\\mathcal{N}(1,1)+\\\\mathcal{N}(10,1)$.\\n 3. I cannot understand Eq. (10), why the term is proportional to gradient of KL divergence? 
There is no detailed explanation for L193, why is the full form added with a coefficient $q(y)$ and why is it equivalent to another gradient?\\n\\n- The analysis in Sec. 3.1 is less convincing due to logical error. If there is model-fitting problem, as the paper claims, why the conclusion from off-sampling loss is credible? What if on-sampling classifier is somewhat more ground-truth but the off-sampling loss is wrong caused by the model-fitting problem on the off-sampling classifier? Or in other words, if classifiers are not credible, all conclusions in Sec. 3.1 are not credible since they are drawn by analysis using classifiers.\\n\\n- The whole pipeline makes no sense to me. As stated in Sec. 3.1, guidance using gradient from classifier may be harmful since model-fitting problem on classifiers. Then why using gradient from previous steps will be less harmful? The denoising process still employs gradient guidance at every step, and I cannot tell the superiority of involving previous step or a summation. Why the classification result at $t=1000$ still works for $t=950$?\\n\\n- The experimental results are also not that convincing.\\n 1. First, if the on- and off-sampling analysis are correct, then ES most reduces the gap between on- and off-sampling loss, indicating it outperforms the proposed method. Besides, why use 150 NFEs rather than 250 like before? Why ES on-sampling loss does not reduce first and then increase as in Fig. 4? The authors may need further clarification.\\n 2. Second, visualization in Fig. 6 fails to demonstrate the outperformance, where the novel method generates samples with obvious artifacts.\\n 3. There are no comparison on FID between CompG, ES and UG. Also will it still be the case when using NFE = 50, 25 or even less?\\n\\n- The writing is poor and hard to read with too many typos: L90 missing $t$ in subscript, L96 no right parenthesis, Eqs. 
(5,6) should be $\\\\sigma_t^2\\\\mathbf{I}$, L120 no $t$ in $\\\\epsilon_\\\\theta$, L159 missing parentheses.\", \"questions\": [\"As stated in Weaknesses, could the authors make it more detailed in Sec. 3 the theory part, especially gradient of KL divergence?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper identifies a model fitting problem in classifier guidance used in conventional diffusion models and proposes a solution called compress guidance. The model fitting issue is illustrated by showing that while the on-sampling loss decreases, the off-sampling loss trend differs completely, indirectly demonstrating that sampling is fit to the parameters of the guidance model. Compress guidance addresses this by compressing the duplicated gradients, mitigating the fitting problem.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper presents the model fitting problem of classifier guidance with experimental evidence.\\n2.\\tIt proposes various alternative methods to address this problem, demonstrating that the proposed compress guidance is the most effective.\\n3.\\tThe approach shows numerical improvements across various models and guidance scenarios.\", \"weaknesses\": \"1.\\tLack of analysis regarding the relationship with the ODE sampler: This method inherently requires more sampling steps to function effectively. 
A performance comparison based on SNR variation via the ODE sampler seems necessary.\", \"questions\": \"Could you provide the full algorithm?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Dear reviewers,\\n\\nWe thank the reviewers' efforts in reviewing our paper. \\n\\nWe generally agree that due to the limited space, we present the model-fitting problem too densely, confuses reviewers dhbC and kim7, and actually, a number of details have been left in the Appendix that answer most of the questions from upNj and j7Bg. While I mostly do not agree with the interpretation of reviewer j7Bg about our theoretical part, we agree that the writing of the notation, especially the $\\\\epsilon$, should be re-wrote carefully to avoid confusion.\\n\\nYour comments help us to improve the manuscripts for the next submission.\\n\\n\\nBest regards,\"}" ] }
C0Ubo0XBPn
Hierarchical Information Flow for Generalized Efficient Image Restoration
[ "Yawei Li", "Bin Ren", "Jingyun Liang", "Rakesh Ranjan", "Mengyuan Liu", "Nicu Sebe", "Ming-Hsuan Yang", "Luca Benini" ]
While vision transformers show promise in numerous image restoration (IR) tasks, the challenge remains in efficiently generalizing and scaling up a model for multiple IR tasks. To strike a balance between efficiency and model capacity for a generalized transformer-based IR method, we propose a hierarchical information flow mechanism for image restoration, dubbed Hi-IR, which progressively propagates information among pixels in a bottom-up manner. Hi-IR constructs a hierarchical information tree representing the degraded image across three levels. Each level encapsulates different types of information, with higher levels encompassing broader objects and concepts and lower levels focusing on local details. Moreover, the hierarchical tree architecture removes long-range self-attention, improving computational efficiency and memory utilization and thus preparing the model for effective scaling. Building on this, we explore model scaling to improve our method's capabilities, which is expected to positively impact IR in large-scale training settings. Extensive experimental results show that Hi-IR achieves state-of-the-art performance in seven common image restoration tasks, affirming its effectiveness and generalizability.
[ "Hierarchical information flow", "image restoration", "tree structure" ]
Reject
https://openreview.net/pdf?id=C0Ubo0XBPn
https://openreview.net/forum?id=C0Ubo0XBPn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yjVMtsNRlW", "ufKbeaKfhl", "u4t8z5nyLM", "tgY4eUMGUK", "sowRVbGZf3", "qEssrSqe1v", "pkWAKzW4EQ", "pBCp2uvlOw", "nVP96MS6BT", "lgdQ2JFhQt", "l77B1Rw6uH", "jnwAz6NPkd", "ibE48rLSuY", "hqBtdFEBXo", "fc6fA2ngat", "cHcXsaJwns", "RC6U0SeUGI", "P213Tv2Gcp", "L0oWJPeS0e", "KhhgPQLAUl", "JrUAPWWuU7", "HsJizdDWXo", "HmygBC3r4R", "DZOmq9Y5p8", "D8GWUmjHyG", "C6AMGv84Ux", "BSYXFEXgGD", "5SjyAbtvfe", "2xbtXsJdqy", "0DiCrajlrL" ], "note_type": [ "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1739980729126, 1732219368233, 1732609206257, 1732217046482, 1732218957537, 1732347520370, 1732517955441, 1732217188710, 1732518168719, 1730354365349, 1732218121645, 1730708701970, 1732612850004, 1732518112894, 1732216358689, 1732583426486, 1732216417340, 1732518030447, 1732542969698, 1732347431467, 1734458016210, 1732584802478, 1732583448090, 1737523504402, 1732219347379, 1730607559245, 1732583411437, 1732347113172, 1730633492391, 1732347312849 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Reviewer_3HDo" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Reviewer_3HDo" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Reviewer_99w8" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Reviewer_VDnP" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Area_Chair_M8S2" ], [ "ICLR.cc/2025/Conference/Submission2451/Reviewer_wAuz" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Reviewer_VDnP" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ], [ "ICLR.cc/2025/Conference/Submission2451/Reviewer_wAuz" ], [ "ICLR.cc/2025/Conference/Submission2451/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Author Rebuttal (Part 2 / 2): Additional table\", \"comment\": \"Table 3: Model scaling-up exploration.\\n| Scale | Model size | Warmup | Conv Type | Set5 | Set14 | BSD100 | Urban100 | Manga109 |\\n|-----------|------------|--------|-----------|-------|-------|--------|----------|----------|\\n| $3\\\\times$ | 15.87 | No | conv1 | 35.06 | 30.91 | 29.48 | 30.02 | 34.41 |\\n| $3\\\\times$ | 57.78 | No | conv1 | 34.7 | 30.62 | 29.33 | 29.11 | 33.96 |\\n| $3\\\\times$ | 57.78 | Yes | conv1 | 34.91 | 30.77 | 29.39 | 29.53 | 34.12 |\\n| $3\\\\times$ | 54.41 | Yes | linear | 35.13 | 31.04 | 29.52 | 
30.2 | 34.54 |\\n| $3\\\\times$ | 55.91 | Yes | conv3 | 35.14 | 31.03 | 29.51 | 30.22 | 34.76 |\\n| $4\\\\times$ | 15.84 | No | conv1 | 33.00 | 29.11 | 27.94 | 27.67 | 31.41 |\\n| $4\\\\times$ | 57.74 | No | conv1 | 33.08 | 29.19 | 27.97 | 27.83 | 31.56 |\\n| $4\\\\times$ | 57.74 | Yes | conv1 | 32.67 | 28.93 | 27.83 | 27.11 | 30.97 |\\n| $4\\\\times$ | 54.37 | Yes | linear | 33.06 | 29.16 | 27.99 | 27.93 | 31.66 |\\n| $4\\\\times$ | 55.88 | Yes | conv3 | 33.06 | 29.16 | 27.97 | 27.87 | 31.54 |\", \"table_5\": \"Dot production attention vs. cosine similarity attention for model scaling. PSNR reported for SR.\\n| Scale | Attn. type | Set5 | Set14 | BSD100 | Urban100 | Manga109 |\\n|-----------|-------------|-------|-------|--------|----------|----------|\\n| $3\\\\times$ | cosine sim. | 34.92 | 30.86 | 29.40 | 29.82 | 34.18 |\\n| $3\\\\times$ | dot prod. | 34.98 | 30.98 | 29.45 | 30.06 | 34.35 |\\n| $4\\\\times$ | cosine sim. | 33.08 | 29.15 | 27.96 | 27.90 | 31.40 |\\n| $4\\\\times$ | dot prod. | 33.14 | 29.09 | 27.98 | 27.96 | 31.44 |\"}", "{\"title\": \"Official Comment by Reviewer 3HDo\", \"comment\": \"Thank you for your detailed responses.\\nThe novelty of the proposed information flow is relatively limited, which I believe falls short of the acceptance standards of ICLR. Moreover, the proposed information flow does not demonstrate a clear connection to being \\\"Generalized and Efficient,\\\" which I believe is an overstatement.\\nI have decided to maintain my rating.\"}", "{\"title\": \"Author Rebuttal (Part 1 / 1)\", \"comment\": \"### Q1: Permutation operation\\n***Ans:*** Thanks for the suggestion. The detail of the permutation operation is explained in the response to the [common question Q1](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=fc6fA2ngat). The permutation is done in a deterministic way. 
Removing the permutation causes the L2 information flow to degrade into L1 information flow, which isolates information as discussed in Line 52 and Table 1 of the main paper. As requested, we did an ablation study to show this effect. The following ablation study shows that the permutation operation is important to maintain the performance of the network. \\n\\n| Dataset | Set5 | Set14 | BSD100 | Urban100 | Manga109 |\\n|------------------|-------|-------|--------|----------|----------|\\n| With Permutation | 38.56 | 34.79 | 32.63 | 34.49 | 39.89 |\\n| W/O Permutation | 38.51 | 34.73 | 32.59 | 34.21 | 39.63 |\\n\\n### Q2: Depth of tree structure and model complexity\\n\\n***Ans:*** Thanks for the insightful comments. Actually, we have done this ablation study to investigate the complexity and efficiency of the proposed tree structure in Section 5.1. And the result is shown in Table 6. To summarize, we have done the following experiments.\\n1. We investigated three versions of the information flow mechanism with different complexities in Figure 4. In Version 1, we use L1 information flow and L2 information flow alternately in consecutive Transformer layers. In Version 2 and 3, both L2 and L3 information flow are implemented in a single Transformer Layer. Version 3 differs from Version 2 in that the projection layer between the L1 and L2 attention is removed and the embedding dimension of $Q$ and $K$ is reduced by half. This leads to models with different complexities including Version 1 (\\~14.5M), Version 2 (\\~19.37M), and Version 3 (\\~15.84M). 
Considering the tradeoff between PSNR gains and model complexity, we use Version 3 throughout this paper. *In short, well-designed deeper tree structures lead to improved model performance but with increased model complexity.*\\n2. In addition to the investigation of L1 and L2 information flow, we also ablate the effects of L3 information flow. Thus, we remove L3 information flow from the network for all the three versions mentioned above, which leads to tree structures with reduced depth. In particular, the depth of the tree is reduces to 2 for the Version 1 model. By comparing the *'with L3'* and *'w/o L3'* columns of Table 3, we conclude the importance of L3 information flow.\\n3. Moreover, during the rebuttal phase, we implemented a deeper tree structure similar to Version 1. We added another information flow attention. Thus, three information flow attention operations alternate in the network, and propagate information in a $8\\\\times 8$, $64 \\\\times 64$, and $256 \\\\times 256$ patch, respectively. The experimental results for the tree structure with depth 4 and another two tree structures with depth 2 and 3 discussed above are shown. The performance of the tree structure with depth of 4 is improved but the model size is also increased.\\n\\n| Tree depth | 2 | 3 | 4 |\\n|----------------|-------|-------|-------|\\n| $2\\\\times$ | 38.31 | 38.34 | 38.41 |\\n| Model size [M] | 11.87 | 14.35 | 17.19 |\\n| | | | |\\n| $4\\\\times$ | 32.85 | 32.89 | 32.95 |\\n| Model size [M] | 12.02 | 14.50 | 17.34 |\\n\\n### Q3: Addtional feedback\\n***Ans:***: All the addtional feebacks are addressed in the revised version.\"}", "{\"title\": \"Author Rebuttal\", \"comment\": \"### Q1: The details of the permutation operation\\n***Ans:*** We explained the permutation operation in detail in the response to the [common question Q1](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=fc6fA2ngat). 
The paper is also revised accordingly.\\n\\n### Q2: Technical novelty\\n***Ans:*** Our response to this question is detailed in the response to the [common question Q2](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=RC6U0SeUGI). In short, the proposed hierarchical information flow is different from the previous method in the following aspects. \\n1. It propagates information from local to global with controlled computational complexity. Whereas, window attention is mainly conducted for local patches, e.g. $8\\\\times 8$.\\n2. L2 information flow attention propagates information at an elevated level while enhancing pixel relevance by constraining the size of the enlarged region. This capability is not achieved by previous methods like ShuffleFormer and Shuffle Transformer.\\n3. We propose three strategies to systematically guide model scaling-up including removing heavyweight $3\\\\times 3$ convolution, warmup, and using dot product attention. Theoretical analysis of why these strategies work is added to Appendix B.\\n\\n### Q3: Complexity analysis of the proposed hierarchical information flow\\n***Ans:*** Thanks a lot for the suggestion. We conducted several analyses on the complexity of the proposed hierarchical information flow in Section 5.1.\\n1. We explored two approaches to implementing L1 and L2 information flow in Transformer layers: alternating them across consecutive layers or integrating both within a single layer. Our findings indicate that integrating L1 and L2 information flow within the same layer enhances performance. The designed Version 3 unifies L1/L2 information flows conceptually into a single flow with an expanded receptive field.\\n2. We examined the impact of tree structure depth, investigating depths of 2, 3, and 4. The results show that increasing the depth improves performance but at the cost of higher computational complexity. Balancing efficiency and accuracy, we selected a tree structure with a depth of 3.\\n3. 
In addition, we also scale up the model with over 100M parameters. This additional experiment validates the potential to further increase the model size.\\n\\n### Q4: Minor points\\n***Ans:*** All the minor points mentioned by the reviewer are addressed in the revised version of this paper.\\n\\n### Q5: Further scaling up model to 100M parameters.\\n***Ans:*** Thanks for the comments. The embedding dimension was increased to 256, and the number of Hi-IR transformer stages was set to 12, with 12 transformer layers in each stage. The model was trained for 200k iterations. The experimental results are presented below. While the network has not yet reached full convergence, the early results indicate that training progresses well as the model is further scaled up. Consistent improvements are also observed for the larger model.\\n\\n| Scale | Params. | Set5 | Set14 | BSD100 | Urban100 | Manga109 |\\n|-----------|---------|-------|-------|--------|----------|----------|\\n| $2\\\\times$ | 14.68M | 38.56 | 34.79 | 32.63 | 34.49 | 39.89 |\\n| $2\\\\times$ | 110.90M | 38.72 | 35.19 | 32.75 | 35.04 | 40.97 |\\n\\n### Q6: Colors in Figure 2\\n***Ans:*** The different colors represent local information. The blending of colors at higher levels in the figure indicates that information gradually propagates beyond the local patch. This information is updated in the caption of this figure.\"}", "{\"title\": \"Please let us know if you have additional questions\", \"comment\": \"Dear Reviewer [3HDo](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=lgdQ2JFhQt),\\n\\nThank you for the comments on our paper.\\n\\nWe have provided a response and a revised paper on Openreview based on the comments. Since the discussion phase ends on Nov. 26, we would like to know whether we have addressed all the issues. 
Please consider raising the scores after this discussion phase.\\n\\nThank you\"}", "{\"title\": \"Author Rebuttal (Part 2 / 2)\", \"comment\": \"### Q4: Model scaling-up strategies\\n***Ans:*** Thanks for the comments. We provide deeper analysis as follows. We also updated the analysis in Sections B.2 and B.3 of the Appendix.\\n1. The heavyweight $3 \\times 3$ convolutions appear at the end of the Hi-IR stage as shown in Fig. 3c. \\n2. *Warmup* is effective for training large models primarily because it mitigates issues related to unstable gradients and helps the optimizer gradually adapt to the model's large parameter space [1,2]. In the early stages of training, the model's parameters are initialized randomly. A high learning rate at this stage can cause large updates, leading to unstable or divergent training due to exploding or vanishing gradients. Warmup starts with a small learning rate and gradually increases it, allowing the optimizer to find a stable path in the loss landscape before applying larger updates. Warmup enables the model to adapt gradually, avoiding overshooting minima and ensuring smoother convergence.\\n3. *Dot product vs. cosine similarity*: We analyze the gradients of the dot product and cosine similarity as follows. Suppose $\\mathbf{q}$ denotes the query and $\\mathbf{k}$ denotes the keys. Then the dot product and cosine similarity between $\\mathbf{q}$ and $\\mathbf{k}$ are denoted as $\\text{dot\\\\_prod}(\\mathbf{q}, \\mathbf{k})$ and $\\text{cos\\\\_sim}(\\mathbf{q}, \\mathbf{k})$. 
\\n\\n - The gradient of the dot product with respect to $\\mathbf{q}$ is $\\frac{\\partial}{\\partial \\mathbf{q}} \\text{dot\\\\_prod}(\\mathbf{q}, \\mathbf{k}) = \\mathbf{k}$.\\n - The gradient of cosine similarity with respect to $\\mathbf{q}$ is $\\frac{\\partial}{\\partial \\mathbf{q}} \\text{cos\\\\_sim}(\\mathbf{q}, \\mathbf{k}) \\n = \\frac{\\mathbf{k}}{\\|\\mathbf{q}\\| \\|\\mathbf{k}\\|} - \\frac{(\\mathbf{q} \\cdot \\mathbf{k}) \\mathbf{q}}{\\|\\mathbf{q}\\|^3 \\|\\mathbf{k}\\|} \\n = \\frac{1}{\\|\\mathbf{q}\\|} \\left(\\mathbf{\\hat{k}} - \\text{cos\\\\_sim}(\\mathbf{q}, \\mathbf{k}) \\mathbf{\\hat{q}}\\right)$, where $\\mathbf{\\hat{q}}$ and $\\mathbf{\\hat{k}}$ are normalized $\\mathbf{q}$ and $\\mathbf{k}$. \\n The gradients with respect to $\\mathbf{k}$ have a similar form. The gradient of cosine similarity involves more terms compared to the gradient of the dot product. This increased complexity in the gradient of cosine similarity makes it more prone to producing large or even unstable gradient values. We conducted a numerical analysis of the gradient values for the two attention methods, with the results presented in Figure 9 of the Appendix. As shown in the figure, the gradient of cosine similarity is indeed more prone to producing large values. This issue becomes more pronounced as the model scales up.\\n\\n[1] Goyal, P. \\\"Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour.\\\" arXiv preprint arXiv:1706.02677 (2017).\\n\\n[2] Kalra, Dayal Singh, and Maissam Barkeshli. \\\"Why Warmup the Learning Rate? Underlying Mechanisms and Improvements.\\\" arXiv preprint arXiv:2406.09405 (2024).\\n\\n### Q5: Rationale for model design in ablation study\\n***Ans:*** Thanks a lot for the question. We have aligned the text better with the aim. 
This ablation serves two purposes: 1) exploring a better design to place the L1/L2/L3 information flow mechanisms; 2) investigating the influence of the tree depth. When designing the L1/L2 information flow attention mechanism, we need to decide whether to interleave L1/L2 information flow across Transformer layers or to implement them in the same layer. To validate this choice, we developed Version 1 and Version 2. However, Version 2 demonstrated reduced performance despite increased model complexity. To address this issue, we introduced Version 3, where L1 and L2 information flows can be conceptually unified into a single flow with a larger receptive field. We revised the corresponding texts in Section 5.1.\"}", "{\"title\": \"Updated response and PDF file\", \"comment\": \"Dear Reviewer [3HDo](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=lgdQ2JFhQt),\\n\\nWe have updated our response and the PDF file to provide better reference. As the deadline for the discussion phase is approaching, please feel free to let us know if you have any further questions.\\n\\nThank you very much.\"}", "{\"summary\": \"This paper proposes a hierarchical information tree structure to represent degraded images across three levels, aiming to balance efficiency and model capacity. The hierarchical tree architecture also supports effective model scaling. Extensive experiments are conducted to validate the approach.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-motivated and easy to read.\\n2. This paper studies the model scaling problem in image restoration, which is valuable. \\n3. Extensive experiments have been conducted.\", \"weaknesses\": \"Although the paper presents a compelling narrative, the challenge lies in balancing the scope and complexity of window attention while improving global information propagation efficiency, a topic that has been widely studied in recent years [1]. 
The proposed solution\\u2014window self-attention, permutation, and window self-attention\\u2014follows a similar approach as in [2]. These methods are missed and are not discussed. Please explicitly compare the hierarchical information flow mechanism with the random shuffle and spatial shuffle approaches in [1] and [2], and highlight key differences or advantages. Additionally, the paper appears somewhat rushed, with several mistakes. For example, in line 244, \"Fig.3(a)\" not \"Fig.3(c)\", and in line 218, \"Convplutional\" is misspelled.\\n\\n[1] Random shuffle transformer for image restoration, ICML 2023.\\n[2] Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer\", \"questions\": \"Why are the experiments in Tables 3, 4, 5, 6, and 7 conducted on different datasets?\\nWhy are 2\\u00d7 and 3\\u00d7 scales used in Table 3, while 2\\u00d7 and 4\\u00d7 scales are used in Table 5?\\nPlease provide a brief explanation in the paper for why different datasets and scale factors were chosen for each set of experiments, and how this impacts the interpretation of the results. This would help improve the paper's clarity and consistency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
The new mechanism has the following advantages: 1) propagating information from local to global in a progressive and efficient way; 2) by constraining the practical patch size in each level, the proposed method promotes the relevance of pixels in the same level, which can be a problem for ShuffleFormer and Shuffle Transformer. The detailed comparison with ShuffleFormer and Shuffle Transformer is done as suggested by [Reviewer 3HDo](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=lgdQ2JFhQt). Third, the main contribution of Uformer is validating the UNet architecture for Transformers while the main contribution of Restormer is the self-attention along the channel dimension. Both of them are different from the proposed hierarchical information flow in this paper.\\n\\n### Q2: Claim of generalization and efficiency\\n***Ans:*** Thank you for the valuable feedback. Our original intention was to convey that generalizing to different IR tasks requires careful consideration of the unique properties of each task. Simply combining computational mechanisms designed for different IR tasks does not necessarily result in an efficient solution. We have revised the statements in the paper to better reflect this perspective. \\n\\n### Q3: Details of the tree-based information flow\\n***Ans:*** Thanks for the comment. We explain the details of the tree-based information flow and the corresponding permutation operation in the response to the [common question Q1](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=fc6fA2ngat). We also updated the paper accordingly.\\n\\n### Q4: Handling images with various degradations\\n***Ans:*** We thank the reviewer for the suggestions. To validate the generalization capability of the proposed method to different types of degradation, we conducted the following experiments. First, we used the same model for both denoising and JPEG compression artifact removal tasks. Notably, a single model was trained to handle varying levels of degradation. 
Second, we performed experiments on image restoration under adverse weather conditions, including rain, fog, and snow. Third, we further investigated a one-in-all image restoration setup, encompassing five different tasks with real-world images. The experimental results demonstrate that the proposed method outperforms previous methods by a significant margin. These three sets of experiments collectively highlight that the proposed hierarchical information flow mechanism enables training a single model that generalizes effectively to various types and levels of degradation. We updated the experiments and results in Appendix C.4.\\n\\n| Method | Params. | Dehazing | Deraining | Denoising | Deblurring | Low-Light | Average |\\n|--------------|---------|----------|-----------|-----------|------------|-----------|---------|\\n| AirNet | 9M | 21.04 | 32.98 | 30.91 | 24.35 | 18.18 | 25.49 |\\n| IDR | 15M | 25.24 | 35.63 | 31.6 | 27.87 | 21.34 | 28.34 |\\n| PromptIR | 33M | 26.54 | 36.37 | 31.47 | 28.71 | 22.68 | 29.15 |\\n| AdaIR | 29M | 30.53 | 38.02 | 31.35 | 28.12 | 23.00 | 30.20 |\\n| Hi-IR (Ours) | 22M | 31.42 | 38.67 | 31.58 | 28.95 | 23.12 | 30.75 |\"}", "{\"summary\": \"This paper introduces Hi-IR, a hierarchical information flow mechanism structured as a tree for image restoration. In Hi-IR, information moves progressively from local areas, aggregates at multiple intermediate levels, and then spreads across the entire sequence. By using this hierarchical tree design, Hi-IR eliminates long-range self-attention, enhancing computational efficiency and memory usage. 
Extensive experiments across seven image restoration tasks demonstrate the effectiveness and generalizability of Hi-IR.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation for this paper is twofold: (1) information flow plays a pivotal role in decoding low-level features, and (2) it is not always necessary to implement information flow through fully connected graphs. Proposing hierarchical information flow via tree-structured attention is a reasonable approach to balancing complexity and the efficiency of global information propagation.\", \"The paper proposes a model scaling-up solution, such as replacing heavyweight $3 \\\\times 3$ convolutions with lightweight operations. The authors reasonably justify why this approach is effective in Section 4 and Appendix B.1.\"], \"weaknesses\": \"- The authors state that the simple permutation operation facilitates the distribution of $l_1$ information nodes across all windows (L238-239); however, there is no specific explanation of how the permutation is applied. Could the authors answer to the questions below?\\n 1. Specify exactly which components the permutation is applied to\\n 2. Clarify if the permutation is random or deterministic \\n 3. Provide an ablation study isolating the impact of the permutation compared to the tree structure\\n\\n This would provide valuable insight into how the permutation contributes to the model's performance. \\n- The authors mention that the actual implementation of the tree structure, such as the depth of the tree and the number of child nodes, can be configured to ensure computational efficiency (L199-201). However, no accompanying ablation study has been conducted on these parameters. The authors are required to do below: \\n 1. Conduct experiments varying the tree depth and number of child nodes\\n 2. Show how these choices impact both performance and computational efficiency\\n 3. 
Discuss how different configurations balance local and global information modeling \\n\\n This would provide concrete evidence for the effectiveness of the hierarchical design.\\n- Additional feedbacks: \\n - It is essential to provide specific details for the clarity. For example:\\n 1. Add SR task specification to Table 6 caption\\n 2. Clarify in main text whether Hi-IR-B or Hi-IR-L is being referred to (Section 5)\\n 3. Add reference to Table 7 when discussing efficiency analysis (L409-L412)\\n 4. Spell out \\\"Dn\\\" as \\\"denoising (Dn)\\\" on first use (Table 7)\\n - There is a typographical error in the sentence (L257) where the first letter should be capitalized. \\\"for each\\\" should be changed to \\\"For each.\\u201d\\n - There is a sentence fragment that lacks a main clause (L375-L376).\\n - Can authors show the GT in Figure 5?\\n - The proposed Hi-IR does not outperform all methods in every instance; therefore, the expression \\\"the proposed Hi-IR outperforms all other comparison methods under both settings\\\" (L474-L475) should be softened.\\n - There appear to be discrepancies between the experimental results in the table and the results described in the text in Section 5.2. Could the authors verify this?\", \"questions\": [\"The reviewer has two questions related to model scaling-up. First, could the authors specify which part of the proposed Hi-IR structure involves replacing heavyweight $3 \\\\times 3$ convolutions with lightweight operations? Second, could the authors explain why applying warming-up and using dot product instead of cosine similarity attention leads to improved scaling-up? Simply showing improved performance with a larger model is insufficient to demonstrate they contribute effective scaling-up convincingly.\", \"In the paragraph discussing the effect of L1 and L2 information flow (L374-L376), could the authors explain why v3 is superior among v1, v2, and v3? 
Alternatively, could the authors explain the rationale behind the experimental design for v1 to v3? Simply showcasing the best one among the three designs does not provide informative insight for the reader.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response: space and time complexity\", \"comment\": \"Thanks a lot for the suggestion. We compared the space and time complexity, and the effective receptive field of the proposed method with a couple of other self-attention methods including global attention and window attention. Suppose the input feature has the dimension $B \\\\times C \\\\times H \\\\times W$, the window size of window attention is $p$, the number of attention heads is $h$, larger patch size of the proposed L2 information flow is $P=s \\\\times p$, the expansion ratio of the MLP in transformer layer is $\\\\gamma$. For the space complexity, we consider the tensors that have to appear in the memory at the same time, which include the input tensor, the query tensor, the key tensor, the value tensor, and the attention map.\\n\\nThe time complexity of the proposed transformer layer is \\n$\\\\mathcal{O}\\\\left((5+2\\\\gamma)BHWC^2 + \\\\frac{3}{2}BHWp^2C+\\\\frac{3}{2}BHWs^2C+9\\\\gamma BHWC\\\\right)$. The last term is very small compared with the former two terms, and can be omitted. Thus, the time complexity is simplified as $\\\\mathcal{O}\\\\left((5+2\\\\gamma)BHWC^2 + \\\\frac{3}{2}BHWp^2C+\\\\frac{3}{2}BHWs^2C\\\\right)$.\\n\\nThe space complexity of the proposed transformer layer is\\n$\\\\mathcal{O}\\\\left(3BHWC + BHWh\\\\max{(p^2, s^2)}\\\\right)$. The maximum receptive field of two consecutive transformer layers is $16P$.\\n\\nIn the following table, we list the space and time complexity, and receptive field of global attention, window attention, and the proposed method. 
As shown in this table, window attention is much more efficient than global attention but at the cost of a reduced receptive field. The proposed hierarchical information flow mechanism is more efficient than window attention in propagating information to the global range. As shown in the third row, to achieve the same receptive field as the proposed method, the space and time complexity of window attention are much higher than those of the proposed method. \\n\\n\\n\\n| Attn. Method | Time Complexity | Space Complexity | Max receptive field of two transformer layers |\\n|-------------------------------|-----------------|------------------|-------------------------------------------|\\n| Global Attn. | $\\\\mathcal{O}\\\\left((4+2\\\\gamma)BHWC^2 + {2}B(HW)^2C\\\\right)$ | $\\\\mathcal{O}\\\\left(4BHWC + B(HW)^2h\\\\right)$ | $H \\\\times W$ |\\n| Window Attn. ($p \\\\times p$) | $\\\\mathcal{O}\\\\left((4+2\\\\gamma)BHWC^2 + {2}BHWp^2C\\\\right)$ | $\\\\mathcal{O}\\\\left(4BHWC + BHWhp^2\\\\right)$ | $2p \\\\times 2p$ |\\n| Window Attn. ($8P \\\\times 8P$) | $\\\\mathcal{O}\\\\left((4+2\\\\gamma)BHWC^2 + {128}BHWp^2s^2C\\\\right)$ | $\\\\mathcal{O}\\\\left(4BHWC + 64BHWhp^2s^2\\\\right)$ | $16P \\\\times 16P$ |\\n| The proposed | $\\\\mathcal{O}\\\\left((5+2\\\\gamma)BHWC^2 + \\\\frac{3}{2}BHW(p^2+s^2)C\\\\right)$ | $\\\\mathcal{O}\\\\left(3BHWC + BHWh\\\\max{(p^2, s^2)}\\\\right)$ | $16P \\\\times 16P$ |\"}
As the deadline for the discussion phase is approaching, please feel free to let us know if you have any further questions.\\n\\nThank you very much.\"}", "{\"title\": \"Author Response to Common Questions (Part 1 / 2)\", \"comment\": \"We sincerely appreciate the reviewers' efforts in evaluating our work and their positive feedback on various aspects of our work (**clear motivation, balancing complexity and efficiency, model scaling-up solution, comprehensive experiments, task generalization ability**, *etc*.) We thank the reviewers for their valuable comments and insightful suggestions, which helped us a lot to improve the quality of our paper. Below, we address the common questions raised by the reviewers.\\n\\n### Q1: Details of tree-based information flow attention and permutation. (Reviewer [99w8](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=jnwAz6NPkd) Q1, Reviewer [wAuz](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=2xbtXsJdqy) Q3, Reviewer [VDnP](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=C6AMGv84Ux) Q1) \\n\\n***Ans:*** We provide more details about the proposed mechanism and permutation operations here. Suppose the input tensor of the L2 information flow is $Y^{l_1}\\\\in \\\\mathbb{R}^{H \\\\times W \\\\times C}$. The permutation operation helps to form the hierarchical tree described in the paper. The purpose of L2 information flow is to expand the receptive field beyond a local patch with maintained computational efficiency. Two coupled operations are done to serve this purpose.\\n\\n**First**, as indicated conceptually in Fig. 2(d), $s \\\\times s$ non-overlapping local patches $p \\\\times p$ in L1 information flow are grouped together to form a larger patch with dimension $P \\\\times P$, where $P = sp$. 
We do not expand to the whole image in this phase due to two considerations: 1) The computational complexity of attention in the global image can be quite high; 2) Not all global image information is relevant to the reconstruction of a specific pixel. \\n\\n**Second**, after the grouping operation, self-attention is applied to pixels located within the larger patch $P \\\\times P$ but distributed across the small patches $p \\\\times p$. To facilitate the self-attention, the $s\\\\times s$ dispersed pixels need to be grouped together via a permutation operation.\\n\\n**In short**, the seemingly complicated operation can be done easily by a reshape and a permutation operation. The input tensor is first reshaped to $\\\\hat{Y}^{l_1}\\\\in \\\\mathbb{R}^{\\\\frac{H}{P} \\\\times s \\\\times p \\\\times \\\\frac{W}{P} \\\\times s \\\\times p \\\\times C}$. Then a permutation is done to form $(Y')^{l_1}\\\\in \\\\mathbb{R}^{(\\\\frac{H}{P} \\\\times \\\\frac{W}{P} \\\\times p^2) \\\\times s^2 \\\\times C}$. L1 information flow attention is done within the $p\\\\times p$ patch while L2 information flow attention is conducted among the dispersed pixel locations in the partitioned larger $P \\\\times P$ regions. We also made this clearer in the revised version of the paper.\"}", "{\"title\": \"Discussion phase\", \"comment\": \"Dear Reviewer,\\n\\nAs the deadline for the discussion phase will end soon (Nov 26), please let us know whether we have addressed all the questions.\\n\\nThank you,\"}", "{\"comment\": \"### Q2: Novelty of the paper. (Reviewer [wAuz](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=2xbtXsJdqy) Q1, Reviewer [VDnP](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=C6AMGv84Ux) Q2)\\n\\n***Ans:*** The novelty of the paper comes from the following aspects.\\n1. We propose a hierarchical information flow mechanism to efficiently and progressively propagate information from local to the global field. 
This is different from previous works such as SwinIR[1], Uformer[2], Restormer[3], ShuffleFormer[4], Shuffle Transformer[5], etc. \\n - The proposed L2 information flow attention facilitates the propagation of information between dispersed pixels within an enlarged region and at an elevated level. Specifically, we avoid expanding to the entire image at this stage for two reasons: 1) the computational complexity of attention across the global image is prohibitively high; and 2) not all global image information is relevant to the reconstruction of a specific pixel. Compared with SwinIR[1], the receptive field is larger. Compared with ShuffleFormer[4] and Shuffle Transformer[5], this mechanism promotes pixel relevance.\\n - The main contribution of Uformer[2] is validating the UNet architecture for Transformers while the main contribution of Restormer[3] is the self-attention along the channel dimension. Both of them are different from the proposed hierarchical information flow in this paper.\\n - Although we use a columnar architecture for SR and a UNet architecture for the other tasks, we do not claim the general architecture as the contribution of this paper. Those are standard choices following the literature.\\n2. We conduct thorough experiments and analysis to study model scaling-up for image restoration. We propose three strategies for IR model scaling-up including removing heavyweight $3\\\\times 3$ convolution, warmup, and using dot product for self-attention. Both experimental results and theoretical analysis (Appendix B) are provided. 
In particular:\\n - Removing heavyweight $3\\\\times 3$ convolution from the network avoids initializing weight parameters with small values, which leads to vanishing gradients and slow convergence.\\n - Warmup helps because it mitigates issues related to unstable gradients in the early phase of training and helps the optimizer gradually adapt to the model\\u2019s large parameter space.\\n - Dot product attention works better than cosine similarity attention because the gradient of cosine similarity is more prone to producing large or even unstable values.\\n3. We conducted experiments on various image restoration problems including image super-resolution, denoising, motion deblurring, defocus deblurring, removing rain, fog, haze, and snow from the image. The thorough analysis validated the generalizability of the proposed method.\\n\\n[1] Liang, Jingyun, et al. \\\"Swinir: Image restoration using swin transformer.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\\n\\n[2] Wang, Zhendong, et al. \\\"Uformer: A general u-shaped transformer for image restoration.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\\n\\n[3] Zamir, Syed Waqas, et al. \\\"Restormer: Efficient transformer for high-resolution image restoration.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\\n\\n[4] Xiao, Jie, et al. \\\"Random shuffle transformer for image restoration.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[5] Huang, Zilong, et al. \\\"Shuffle transformer: Rethinking spatial shuffle for vision transformer.\\\" arXiv preprint arXiv:2106.03650 (2021).\", \"title\": \"Author Response to Common Questions (Part 2 / 2)\"}
As the deadline for the discussion phase is approaching, please feel free to let us know if you have any further questions.\\n\\nThank you very much.\"}", "{\"comment\": \"Thank you for your detailed responses, which have partially addressed my concerns.\\n\\nWhile the authors have provided comparisons of efficiency metrics such as FLOPs, runtime, and parameters, I believe that for a more comprehensive analysis of the complexity associated with hierarchical information flow, it is essential to include a theoretical perspective on both time and space complexity. \\n\\nI have decided to maintain my rating.\"}", "{\"title\": \"Please let us know if you have additional questions\", \"comment\": \"Dear Reviewer [VDnP](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=C6AMGv84Ux),\\n\\nThank you for the comments on our paper.\\n\\nWe have provided a response and a revised paper on Openreview based on the comments. Since the discussion phase ends on Nov. 26, we would like to know whether we have addressed all the issues. Please consider raising the scores after this discussion phase.\\n\\nThank you\"}", "{\"metareview\": \"The paper proposes a hierarchical information flow mechanism for image restoration. However, all the reviewers believe that the novelty of the proposed information flow is relatively limited, which falls short of the acceptance standards of ICLR. Besides, the paper lacks a detailed discussion and comparison with other self-attention mechanisms.\\nBased on the reviewer's average rating, I believe that this paper is not yet ready for publication at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"All the reviewers participated in the discussion, but none of them changed their scores. The average score of the paper remains below the acceptance threshold.\"}", "{\"comment\": \"Thank you for the authors' responses. 
After carefully reading the responses and the revised version, I believe that the proposed hierarchical information flow is a variant of the self-attention mechanism which performs non-local operations to aggregate similar patches. The differences between the proposed method and other self-attention mechanisms should be discussed and compared. In addition, the proposed method achieves slight performance improvement on the benchmark datasets of several tasks, such as single image super-resolution and denoising. Therefore, I would like to maintain my original score and rating.\"}", "{\"title\": \"Discussion phase\", \"comment\": \"Dear Reviewer,\\n\\nAs the deadline for the discussion phase will end soon (Nov 26), please let us know whether we have addressed all the questions.\\n\\nThank you,\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Author Rebuttal (Part 1 / 2)\", \"comment\": \"### Q1: Comparison with ShuffleFormer and Shuffle Transformer.\\n***Ans:*** Thanks a lot for the comments. We compare with Random shuffle transformer [1] and Shuffle transformer [2]. The following comparison is added to Appendix D in the revised paper. Both methods use spatial shuffle operations to facilitate non-local information exchange, with one being random and the other deterministic. \\n\\n1. Random Shuffle Transformer (ShuffleFormer) [1] applies random shuffling on the spatial dimension, which increases the probability of global information existing within a local window. While this operation extends the receptive field globally in a single step, it compromises the relevance of pixels within the window. In contrast, the hierarchical information flow proposed in this paper progressively propagates information from local to global while preserving the relevance of attended pixels. A comparison with ShuffleFormer on image deblurring is presented in the following table. 
Hi-IR outperforms ShuffleFormer by a significant margin while using 55.5% fewer parameters. This demonstrates the effectiveness of the hierarchical information flow method introduced in this work. The comparison is added to Table 11 of the main paper.\\n\\n| Method | Model Size | GoPro PSNR | HIDE PSNR |\\n|---------------|------------|------------|-----------|\\n| ShuffleFormer | 50.61M | 33.38 | 31.25 |\\n| Hi-IR | 22.33M | 33.99 | 31.64 |\\n\\n2. Shuffle Transformer employs a spatial shuffle operation to aggregate information from distant pixels or tokens. However, it differs from the proposed Hi-IR in several key aspects. First, Shuffle Transformer does not enable progressive information propagation within a hierarchical tree structure. Second, its shuffle operation is based on a fixed grid size of $g = 8$. The distance between pixels in the shuffled window is $H/g$ and $W/g$ along the two axes, which directly depends on the image size. For large images (e.g., 1024 pixels), this design forces distant pixels to attend to one another, often introducing irrelevant information. Consequently, this operation is unsuitable for image restoration tasks, where image sizes can become extremely large. In contrast, the L2 information flow attention proposed in this paper limits the maximum patch size, thereby constraining the maximum distance between pixels at this stage. This restriction enhances the relevance of pixel interactions, making it more effective for image restoration tasks.\\n\\n[1] Xiao, Jie, et al. \\\"Random shuffle transformer for image restoration.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[2] Huang, Zilong, et al. \\\"Shuffle transformer: Rethinking spatial shuffle for vision transformer.\\\" arXiv preprint arXiv:2106.03650 (2021).\\n\\n### Q2: Typos\\n***Ans:*** Thanks a lot for pointing out the typos. 
We have corrected the typos in the revised version.\\n### Q3: Different datasets and scale factors in Tables 3, 4, 5, 6, and 7\\n***Ans:*** Thank you for highlighting the inconsistency of the datasets and scaling factors in the ablation study. We provide the following clarifications.\\n\\n1. Different test sets in Tables 3, 4, 5, 6, and 7: The ablation study was conducted on super-resolution (SR) tasks, utilizing five test sets: Set5, Set14, BSD100, Urban100, and Manga109. The model was trained once and evaluated on all datasets. However, due to space constraints, we could not display all test results. Notably, the performance gap between different realizations or training methods is more pronounced on Urban100 and Manga109 compared to Set5, Set14, and BSD100. This discrepancy reflects the characteristics of the datasets rather than the methods themselves, as the results are consistent across datasets. To enhance the diversity of reported results, we primarily presented results on the representative dataset Set5 in Tables 4 and 6, Urban100 in Table 7, and across all test sets in Tables 3 and 5. \\n\\n2. Different scaling factors in Table 3 and Table 5\\nWe indeed conducted experiments for all scaling factors in both ablation studies. However, due to space constraints in the main paper, some results were omitted. The amended results are presented below. We also append the full table in Appendix B.\"}", "{\"summary\": \"This paper proposes an efficient image restoration (IR) method called Hi-IR. Specifically, it introduces a hierarchical information flow that efficiently aggregates global context for each pixel. The authors also scale up the network and offer insights into training larger models. Extensive experiments across seven tasks validate the effectiveness and generalization capability of Hi-IR.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The experimental results on seven IR tasks validate the strong task generalization ability of the proposed method. The ablation studies also provide evidence of the effectiveness of individual modules.\\n\\n2. The exploration of scaling IR models may provide practical insights for future research. \\n\\n3. The writing is clear and easy to follow.\", \"weaknesses\": \"Major:\\n1. The details of the permutation operation in Section 3.2 are not fully explained. Various types of permutations could be considered, such as random and circular permutations. I believe this operation is the main technical difference from the window-shift operation used in SwinIR [1]. Additional information on the permutation approach would enhance clarity.\\n\\n2. From my perspective, the core concept: hierarchical information flow (HIF for short) is essentially a window attention [1] without shifting (L1 information) and a new cross-window interaction operation (L2 information, i.e. permute then MSA). Therefore, the technical novelty is limited. The authors may give more discussion on the other realizations of HIF.\\n\\n3. I believe a complexity analysis of the proposed HIF is beneficial to show the efficiency advantage of Hi-IR. On the other hand, it will provide more insights on how to choose better realizations of HIF (Line 078).\", \"minor\": \"1. In the second paragraph of Related Work, the authors introduce several attention mechanisms for IR (Line 131-133), whereas the corresponding reference is missing. \\n\\n2. The spacing between some captions and corresponding figures/tables (e.g. Figure 5, Table 15) is not well set.\\n\\n3. Some typos (e.g. Line 016 remove propagation, Line 244 change Fig.3(c) to Fig.3(b)).\\n\\n[1] Swinir: Image restoration using swin transformer. ICCVW21.\", \"questions\": \"1. I\\u2019m curious about the model's performance when scaled to a larger size (e.g., ~100M parameters). Is the proposed scaling method effective for models of this size?\\n\\n2. 
Figure 2 is somewhat unclear. Could you clarify what the different colors represent?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion phase\", \"comment\": \"Dear Reviewer,\\n\\nAs the deadline for the discussion phase will end soon (Nov 26), please let us know whether we have addressed all the questions.\\n\\nThank you,\"}", "{\"title\": \"Please let us know if you have additional questions\", \"comment\": \"Dear Reviewer [99w8](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=jnwAz6NPkd),\\n\\nThank you for the comments on our paper. \\n\\nWe have provided a response and a revised paper on Openreview based on the comments. Since the discussion phase ends on Nov. 26, we would like to know whether we have addressed all the issues. Please consider raising the scores after this discussion\\u00a0phase.\\n\\nThank you\"}", "{\"summary\": \"This paper proposes a hierarchical information flow principle for general image restoration tasks, which aims to address three significant problems in image restoration including a generalized and efficient IR model, model scaling, and the performance of a single model on different IR tasks. The paper comprehensively analyzes different configurations of model scaling and provides sufficient evaluation results on different image restoration tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper clearly describes the motivations of the proposed method, and the structure of the paper is well-organized, and easy to follow.\\n\\n2. The paper provides comprehensive experiments to evaluate the performance of different model scaling configurations, which are convincing.\\n\\n3. The paper demonstrates that the proposed hierarchical information flow design is effective for performance improvement in different IR tasks.\", \"weaknesses\": \"1. 
The proposed method appears to have limited novelty. First, it adopts the U-Net structure, whose efficiency has already been evaluated in works like Uformer [1] and Restormer [2]. Second, the hierarchical information flow design seems to be another variation of existing efficient self-attention mechanisms.\\n\\n2. The claim that generalization and efficiency are inherently a trade-off may not be entirely accurate. Why would a model with strong generalization capabilities be considered inefficient? Is there any external reference or evidence to support this claim?\\n\\n3. The paper does not clearly explain the details of the tree-based self-attention mechanism. How is the tree structure specifically applied within the self-attention mechanism? Please provide more details or examples to clarify this.\\n\\n4. The paper claims that the proposed method can handle images with various degradations, but the experiments mainly show its effectiveness on individual IR tasks. Since real-world images often have multiple types of degradations, how does the proposed method perform when applied to such images?\\n\\n[1] Wang, Zhendong, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. \\\"Uformer: A general u-shaped transformer for image restoration.\\\" In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 17683-17693. 2022.\\n\\n[2] Zamir, Syed Waqas, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. \\\"Restormer: Efficient transformer for high-resolution image restoration.\\\" In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5728-5739. 2022.\", \"questions\": \"1. It is better to clarify what distinguishes your approach from these prior works.\\n\\n2. Why would a model with strong generalization capabilities be considered inefficient? Is there any external reference or evidence to support this claim?\\n\\n3. 
How is the tree structure specifically applied within the self-attention mechanism? Please provide more details or examples to clarify this.\\n\\n4. How does the proposed method perform when applied to real-world images for image super-resolution and image denoising? More experiment results should be included.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Please let us know if you have additional questions\", \"comment\": \"Dear Reviewer [wAuz](https://openreview.net/forum?id=C0Ubo0XBPn&noteId=2xbtXsJdqy),\\n\\nThank you for the comments on our paper.\\n\\nWe have provided a response and a revised paper on Openreview based on the comments. Since the discussion phase ends on Nov. 26, we would like to know whether we have addressed all the issues. Please consider raising the scores after this discussion phase.\\n\\nThank you\"}" ] }
C0HDYvGwol
3D-Adapter: Geometry-Consistent Multi-View Diffusion for High-Quality 3D Generation
[ "Hansheng Chen", "Bokui Shen", "Yulin Liu", "Ruoxi Shi", "Linqi Zhou", "Connor Z. Lin", "Jiayuan Gu", "Hao Su", "Gordon Wetzstein", "Leonidas Guibas" ]
Multi-view image diffusion models have significantly advanced open-domain 3D object generation. However, most existing models rely on 2D network architectures that lack inherent 3D biases, resulting in compromised geometric consistency. To address this challenge, we introduce 3D-Adapter, a plug-in module designed to infuse 3D geometry awareness into pretrained image diffusion models. Central to our approach is the idea of 3D feedback augmentation: for each denoising step in the sampling loop, 3D-Adapter decodes intermediate multi-view features into a coherent 3D representation, then re-encodes the rendered RGBD views to augment the pretrained base model through feature addition. We study two variants of 3D-Adapter: a fast feed-forward version based on Gaussian splatting and a versatile training-free version utilizing neural fields and meshes. Our extensive experiments demonstrate that 3D-Adapter not only greatly enhances the geometry quality of text-to-multi-view models such as Instant3D and Zero123++, but also enables high-quality 3D generation using the plain text-to-image Stable Diffusion. Furthermore, we showcase the broad application potential of 3D-Adapter by presenting high quality results in text-to-3D, image-to-3D, text-to-texture, and text-to-avatar tasks. Code will be made publicly available.
[ "3D generation", "multi-view", "diffusion models", "texture generation", "radiance fields", "gaussian splatting" ]
Reject
https://openreview.net/pdf?id=C0HDYvGwol
https://openreview.net/forum?id=C0HDYvGwol
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zkMbp3If1b", "v9LkPttN5r", "pIFufJUuzW", "nuOZkp93s5", "mvsCN1gIVW", "lLw1nN9acW", "l6wYkpJ0Jg", "k4qVnTkW1R", "hwpIeCtaTC", "fCLiSMY0gz", "eMGgoJecgp", "XCgVnNVHq4", "X0fB5vVZVC", "Wjs9L6M1yC", "WHMLPw9c2h", "TuaDS6ZyfM", "TaiscDcQiT", "T2UAmhrxse", "Q1VPI0WOs2", "Ltdf8ycxbR", "LqVcZOYAgD", "KMHJUVGF0X", "IkJuHJfAzX", "I1TTWWZAfA", "DoZtLjjt1K", "A20u7BeMqM", "4XVhHfNE9t", "2rWeOZyiA0", "0DlNdCHWph" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732345675085, 1732525924185, 1732343736171, 1732346789931, 1732347056913, 1737523881028, 1730702500958, 1732753969136, 1732746921869, 1732348130913, 1732745454864, 1732733378850, 1732341773959, 1732340880898, 1732593151372, 1734855158105, 1730677699808, 1732511276748, 1732782583150, 1730694885958, 1732736951801, 1729874609805, 1732745076302, 1731143097676, 1732686286074, 1732342317581, 1732345247840, 1732742120685, 1732347793409 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8003/Reviewer_RiGq" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Reviewer_dfQt" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Reviewer_kMh5" ], [ "ICLR.cc/2025/Conference/Submission8003/Area_Chair_dYeV" ], [ "ICLR.cc/2025/Conference/Submission8003/Reviewer_dfQt" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Reviewer_kMh5" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Reviewer_TS5P" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Reviewer_95MW" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ], [ "ICLR.cc/2025/Conference/Submission8003/Reviewer_dfQt" ], [ "ICLR.cc/2025/Conference/Submission8003/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Comments on the questions (part 2):\\n\\n> **The visuals in column 5 appear broken and clearly worse than those in, for example, LGM, yet the CLIP score is significantly higher. Please double-check these evaluations.**\\n\\nColumn 5 shows the two-stage results from the original GRM. While some results may appear broken, more outputs are included in the appendix for reference. We observed that the CLIP score is not sensitive to broken geometry, which is why we rely on the MDD score for evaluating geometry quality. Our evaluation results in Table 2 are valid and align well with the original GRM paper.\\n\\n> **Could you justify the use of ControlNet? How does it compare to using pure latent fusion?**\\n\\nThank you for the question. As noted above, we use ControlNet to preserve the original model topology. 
Since ControlNet uses zero-initialized weights, its impact to the base model is minimal, allowing 3D-awareness to be introduced without degrading quality.\\n\\nLatent fusion has been tested in our added dynamic I/O sync baseline, which performs dynamic fusion between the UNet outputs and the rendered outputs in the latent space. While this improves upon vanilla I/O sync, the quality remains below the 2-stage baseline and 3D-Adapter. Details are provided in the revised Appendix A.2.\\n\\n> **It's quite complex and specific training process, it would be great to hear about the reasoning/ablations behind these numbers.**\\n\\nThe reconstruction method is not part of our main contributions, and we do not claim it to be optimal, though we tuned it extensively. More details are in the Appendix.\\n\\nThe purpose of this section is to provide a consistent platform for testing the optimization-based 3D-Adapter and comparing it to 2-stage and I/O sync baselines. The text-to-avatar task effectively demonstrates 3D-Adapter's advantages over these baselines when using the same reconstructor.\"}", "{\"comment\": \"Some additional comments on the I/O sync baseline:\\n\\nI/O sync itself is not typically the first choice for 3D generation tasks. For example, in text-to-3D and image-to-3D tasks, most state-of-the-art methods (e.g., GRM, CRM, InstantMesh) employ **two-stage** approaches. I/O sync is more commonly used in texture generation tasks (e.g., SyncMVD, TexPainter), where our own I/O sync implementation outperforms others. Furthermore, its drawbacks are thoroughly analyzed both theoretically in Appendix A.1 (where the linearity assumption holds due to linear texture blending) and empirically, as demonstrated in the experiments in Table 5.\\n\\nIf you find our evidence insufficient, please be specific on the aspects that need improvement so we can better address them.\"}", "{\"comment\": \"Thank you for the comments. 
We have uploaded a revised PDF and the following are comments on the weaknesses.\\n\\n> **The quality improvement in the main text-to-3D task is 1) marginal and 2) visuials are not convincing enough to justify complicating the training pipeline. For instance, in Fig. 1, image-to-3D visuals are comparable b/w 3D-Adapter and \\\"two-stage pipeline\\\".**\\n\\nWe revised Fig. 3 to highlight the differences between 3D-Adapter and 2-stage results. Across all 3D generation comparisons, 2-stage consistently shows more floaters and fuzzy geometries, which are fixed by 3D-Adapter. This is also reflected in the MDD metric in Table 1. In Fig. 8, we also show the differences of intermediate results. \\n\\nThe goal of 3D-Adapter is to improve geometry consistency without sacrificing visual quality, effectively removing floaters and texture seams. Ideally, 3D-Adapter should reproduce the base model's appearance unless inconsistencies arise. **We do not expect 3D-Adapter to produce completely new results**, which should be the job of the base diffusion model and the reconstruction method.\\n\\n> **The paper lacks significant novelty, as the use of 3D representations for synchronization has been explored before (e.g. NerfDiff). The main contribution appears to be the placement of the adapter in the parallel branch rather than at the input/output stage of the diffusion UN et. However, this seems more like a technical choice, as is the decision to train Control Net on intermediate outputs.**\", \"edited\": \"We have clarified the differences between previous synchronization methods (I/O sync) and our 3D-Adapter. Our approach places 3D reconstruction in a parallel branch, preserving the base model\\u2019s topology. In the revised introduction, we clarified the error accumulation problem of I/O sync:\\n\\n> Diffusion model sampling is sensitive to error accumulations (Li & van der Schaar, 2024). 
I/O sync methods insert 3D reconstruction and rendering operations into the denoiser in a way that disrupts the original model topology and introduces errors during each denoising step (unless reconstruction and rendering are perfect).\\n\\nOur approach, while simple, proves effective, as shown in our experiments. I/O sync often produces blurry results, whereas 3D-Adapter consistently demonstrates better visual quality.\\n\\nControlNet is an intuitive choice for our method because its zero initialization is designed to minimize the impact to the base model, and the base model\\u2019s topology remains intact. Additionally, it inherits the base model\\u2019s knowledge, requires minimal fine-tuning. Other technical choices are of course possible (e.g., we have tested T2I Adapter, which performed worse than ControlNet), especially considering the other base models such as DiTs. Nevertheless, we believe our core contribution\\u2014the parallel 3D branch\\u2014is a critical innovation.\\n\\n> **Some visuals (e.g. Fig. 3, column 5) raise concerns about the correctness of the implementation.**\\n\\nWe have double checked our implementation of I/O sync and found no problems. In fact, our I/O sync baseline is exceptionally strong on the text-to-texture benchmark; for text-to-3D, its MDD metric is also the best overall. Regarding the bad visual appearance, we added the explanation in the revised Appendix A.2 (edited):\\n\\n> While I/O sync works reasonably on our texture generation benchmark, our text-to-3D model using I/O sync (A2 in Table 1 and Fig. 3) exhibits significant quality degradation due to mode collapse. We believe the main reasons are twofold. First, the base model Instant3D generates a very sparse set of only four views, which are hard to synchronize. 
Second, our fine-tuned GRM reconstructor is trained using the depth loss to suppress surface fuzziness, which has a negative impact when its sharp renderings $\\\\tilde{\\\\mathbf{x}}_t$ are used as diffusion output. This is because a well-trained diffusion model should actually predict blurry outputs $\\\\hat{\\\\mathbf{x}}_t$ in the early denoising stage as the mean of the distribution $p(\\\\mathbf{x}_0|\\\\mathbf{x}_t)$. Only in the late stage should $\\\\hat{\\\\mathbf{x}}_t$ be sharp and crisp, as shown in Fig. 8.\\n\\nIn light of this, in the revised PDF, we added a stronger I/O sync baseline (A3) with the dynamic blending technique:\\n\\n> As shown in Table 1 and Fig. 3, dynamic I/O sync demonstrates significant improvements in visual quality over vanilla I/O sync.\", \"edit\": \"Please also note that I/O sync itself is not typically the first choice for 3D generation tasks. For example, in text-to-3D and image-to-3D tasks, most state-of-the-art methods (e.g., GRM, CRM, InstantMesh) employ two-stage approaches. I/O sync is more commonly used in texture generation tasks (e.g., SyncMVD, TexPainter), where our own I/O sync implementation outperforms others. Here, its drawbacks are thoroughly analyzed both theoretically in Appendix A.1 (where the linearity assumption holds due to linear texture blending) and empirically, as demonstrated in the experiments in Table 5.\"}", "{\"comment\": \"Thank you for the comments. We have uploaded a revised PDF and the following are comments in response to the weaknesses.\\n\\n> **Lack of information on computation and memory overhead for competing methods**\\n\\nOur evaluation focuses on comparing 3D-Adapter with 2-stage and I/O sync methods. Detailed runtime analysis is provided in Table 7 (Appendix) and briefly mentioned in Section 5.2. Table 7 shows that 3D-Adapter's per-step time is 0.707 sec, with 0.531 sec spent on added modules. 
The remaining 0.176 sec is the time for 2-stage method (excluding the second reconstruction stage). Results for the Zero123++ model are similar.\", \"edit\": \"EpiDiff is an image-to-3D model. We have tried the official code, but our 48GB A6000 GPU failed to run the inference, as the author suggested using an 80GB GPU. The core contribution of EpiDiff is improving the network architecture using epipolar attention, while our work focus on leveraging explicit 3D reconstruction and rendering, which is on a different track to EpiDiff.\\n\\n> **Confusing notation of the proposed method in the tables**\\n\\nMany other methods are also combinations of existing multi-view diffusion models and reconstruction methods without explicitly naming them. Detailing all such methods equally would be too verbose given the space constraints.\"}", "{\"comment\": \"Comments on the questions:\\n\\n> **Have any similar results been discussed previously in different domains? It would be convincing to add references to discuss consistency across different works in different applications.**\\n\\nTo our knowledge, few application-focused papers explore the theory behind linear combinations of score functions. We have added a citation to a concurrent work (Bradley & Nakkiran, Classifier-Free Guidance is a Predictor-Corrector), which examines similar phenomena in a different context.\\n\\n> **How much training data should be used to train GRM? As stated in the training details of Sec. 4.1, the authors fine-tuned 2000 iterations with 16 objects in a single batch, i.e., 32000 objects. Considering that a single object yields many views, it seems to be a lot for fine-tuning. What is the minimal training data to make the proposed method work? This question can also be rephrased as why the authors picked 2000 and 4000 iterations for the Instant3D and Zero123++ cases, respectively.**\\n\\nThe iteration numbers are set to ensure GRM performance saturates under the given schedules. 
GRM for Zero123++ requires more iterations because the released GRM only supports 4 views with a white background, while Zero123++ generates 6 views with a gray background, creating a larger domain gap. Using more iterations helps address this gap. Dataset sizes are detailed in Section 4.1: 47k for Instant3D and 80k for Zero123++. While we have not ablated dataset size, these numbers should have exceeded the minimal requirements.\\n\\n> **Extending analysis: The authors showed why I/O sync may be bad, which is interesting. Then, another question naturally arises: why ControlNet-like feature addition (feedback augmentation) used in this work is effective? It is more interesting because the proposed method also does not guarantee a reduction of the gap in Eq. (7).**\\n\\nThank you for the question. For 3D-Adapters with finetuned ControlNets, the diffusion loss forces the output $\\\\hat{\\\\mathbf{x}}_t$ to match the mean of the distribution $p(\\\\mathbf{x}_0|\\\\mathbf{x}_t)$, where $p(\\\\mathbf{x}_0|\\\\mathbf{x}_t)$ is a joint probability of multiple views. Without 3D feedback augmentation and ControlNet, it is more difficult for the model to learn the multi-view correlations in $p(\\\\mathbf{x}_0|\\\\mathbf{x}_t)$, while a 3D-aware branch makes it easier to do so. For 3D-Adapter with off-the-shelf tile and depth ControlNets, we empirically observe good results (since the tile ControlNet is robust to blurry conditions), although the accuracy has no perfect guarantee.\\n\\n> **Lines 286-288: The authors mentioned two ControlNets for superresolution and depth. Does it mean the authors use both ControlNet or is the depth ControlNet replaced with the super-resolution ControlNet? Then, how can we feed depth rendering to the superresolution ControlNet encoder without training?**\\n\\nWe use both ControlNets. 
The tile superresolution ControlNet takes the rendered RGB as input, while the depth ControlNet takes the rendered depth as input.\\n\\n> **Line 287: Please provide the URL for the superresolution Control Net as a footnote to give proper credit and to enhance the reproducibility.**\\n\\nThe tile ControlNet is a part of the official ControlNet repository from the ControlNet authors.\\n\\n> **Line 485 - \\\"texture field optimization\\\": What is texture field optimization referred to? Is the texture field optimization applicable to the other competing methods, TEXTure, Text2Tex, and SyncMVD, in Table 6? Was it applied to them already?**\\n\\nAfter the final denoising step, occlusion may leave some object surfaces untextured when using texture backprojection. To address this, we use InstantNGP as a volumetric RGB representation, enabling smooth interpolation for unseen regions by optimizing it to match the final denoised views. TexPainter already employs this approach and SyncMVD could potentially adopt it. We don\\u2019t think this isthe primary factor affecting quality (TexPainter is clearly worse than SyncMVD and our I/O sync baseline due to other implementation reasons).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper suggests a novel approach by introducing the 3D feedback augmentation adapter. It presents methods for applying this adapter to both feed-forward 3D generation and pretrained 2D models. The design incorporates a comprehensive 3D rendering process and a fine-tuning stage utilizing ControlNet to enhance the model's 3D awareness and multi-view consistency. Through experiments, the paper demonstrates the superior performance of the 3D-Adapter by applying it to various tasks and diverse base models, showcasing its effectiveness and versatility across different applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The idea behind the 3D-Adapter is conceptually novel. It incorporates a lightweight 3D rendering pipeline within the diffusion sampling process through a mechanism that balances computational efficiency and multi-view consistency.\\n\\n\\n2. While there may be limitations in terms of generalizability, the 3D-Adapter has demonstrated significant versatility through its application across various tasks and base models, as shown through comprehensive experiments.\", \"weaknesses\": \"1. Clarity of exposition: A more comprehensive explanation of the relationship between GRM and models such as Instant3D and Zero123++ would greatly enhance the reader's understanding. It is particularly important to elucidate whether GRM builds 3D representations from the views generated by Instant3D and Zero123++ and whether fine-tuning of GRM is involved in this process.\\n\\n2, Limitations in the scalability of 3D-Adapter: \\n- According to Equation 2, it appears that the 3D-Adapter could potentially be applied to other I/O sync-based pretrained models. However, it is necessary to discuss whether incorporating ControlNet into I/O sync pretrained models beyond GRM would yield similar results, and why Training Phase 1 is essential. If Training Phase 1 is a crucial step for optimizing the performance of 3D-Adapter, including GRM, it should be examined whether this phase is equally necessary when applied to other models.\\n\\n- Despite utilizing various 3D representation techniques and loss functions, such as NeRF and DMTet, ensuring global semantic consistency remains challenging. This inherent limitation highlights the need for additional conditions to achieve robust 3D generation. Consequently, this raises concerns about the 3D consistency of 3D-Adapter in more general text-to-3D or image-to-3D tasks outside of specialized domains, such as avatar generation.\", \"questions\": [\"The role of augmentation guidance appears to be significant. 
Could you provide visual evaluations in addition to Table 1 to better illustrate this effect?\", \"Please include a dedicated pipeline figure specifically for the process described in Section 4.2 (line 289) to enhance clarity.\", \"What are the results of using the optimization-based 3D-Adapter without additional conditioning?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary of Revisions\", \"comment\": [\"**Some of our comments have been edited to reflect the latest revisions. Please refer to the latest comments on OpenReview instead of the email correspondence. Thank you for your understanding!**\", \"We sincerely thank all the reviewers for their comments and discussions. We understand that the content of this paper is dense, and we greatly appreciate the reviewers\\u2019 efforts in carefully reading the manuscript and rebuttal and providing constructive suggestions for improvement.\", \"We have thoroughly addressed all concerns and uploaded a fully revised manuscript. Below is a summary of the major changes:\", \"Improved Writing and Clarity:\", \"Revised half of the introduction to better summarize this work and highlight the contributions.\", \"Refined our claims about the I/O sync baseline and provided more in-depth analysis in Appendix A.\", \"Revised Section 4.2 for improved clarity.\", \"Updated Fig. 2 by adding VAE encoding and decoding blocks for better clarity. Additionally, Fig. 9 was added to illustrate the optimization-based 3D-Adapter.\", \"References:\", \"Incorporated and discussed all suggested references, including concurrent work.\", \"New Comparisons:\", \"Added comparisons with InstantMesh, CRM, TexPainter, and 3DTopia. 
Updated corresponding figures.\", \"**Updated**: Comparison with EpiDiff is also added now, as requested by Reviewer dfQt\", \"New Experiment Results:\", \"Table 1, Figure 3: (A3) Dynamic I/O sync\", \"Table 5, Figure 5: 3D-Adapter + I/O sync\", \"Figure 8: Visualization of the intermediate results\", \"Figure 10: Qualitative results of Table 1 B0-C1\", \"Training Details:\", \"Added VRAM usage and training hours in Section 4.1.\", \"Inference Times:\", \"Reported inference times for all SOTA comparisons.\", \"Limitations:\", \"Acknowledged that 3D-Adapter's text-to-texture pipeline does not disentangle texture from lighting.\"]}", "{\"comment\": \"Update: The limitations on texture-lighting disentanglement and PBR texture references have been added.\"}", "{\"comment\": \"Comments on the questions:\\n\\n> **Could the authors elaborate on how 3D-Adapter's 3D feedback augmentation differs fundamentally from existing I/O sync techniques?**\\n\\nAs stated in the paper,\\n\\n> We broadly define I/O sync as inserting a 3D representation and a render-ing/projecting operation at the input or output end of the denoising network to synchronize multiple views.\\n\\nTypical output synchronization models use rendered views as outputs for the current denoising step, fed into the diffusion solver. These models often face error accumulation (unless reconstruction and rendering are perfect) or mode collapse (discussed in Appendix A.1). For example, DMV3D is a native I/O sync model, it does not suffer from mode collapse because it is trained to predict $\\\\hat{\\\\mathbf{x}}_t$ as the mean of the distribution $p(\\\\mathbf{x}_0|\\\\mathbf{x}_t)$, where $p(\\\\mathbf{x}_0|\\\\mathbf{x}_t)$ is a joint probability of multiple views. However, it still suffers from error accumulation in late denoising stages (confirmed with the DMV3D authors). 
On the other hand, SyncMVD, an adapted I/O sync model using Stable Diffusion for texture generation, lacks domain-specific training and suffers from mode collapse due to score averaging.\\n\\nWhat makes 3D-Adapter *unique* is its ability to introduce 3D-awareness while keeping the base model's topology intact. By leveraging ControlNet\\u2019s zero-initialized weights, 3D-Adapter integrates 3D priors with minimal impact on the base model, achieving enhanced performance without quality degradation.\\n\\n> **Additionally, it is recommended that the authors consider showing intermediate results during the denoising process, such as outputs from the 3D reconstruction model at various stages. This could help readers better understand the contributions of the proposed method.**\\n\\nThank you for the suggestion. We have added visualizations of the intermediate results in the revised PDF (Fig. 8 in the Appendix).\"}", "{\"comment\": \"We understand your concerns about the PBR texture comparison. We will address this as a limitation in Section 5.4 in our next update.\\n\\nRegarding [Bradley & Nakkiran], first of all, it is an **concurrent** work. Secondly, our appendix was written in April 2024, way before the release of [Bradley & Nakkiran] (Aug 2024). We have already added that reference in the revised version, but we do not understand why it is **necessary** to cite a concurrent work which focuses on CFG instead of 3D generation.\\n\\nWe acknowledge that the initial submission has a lot of missing references, which have been added in the revised version.\"}", "{\"comment\": \"Thank you for preparing the responses and the revision.\\nSome of the responses clarify my questions and concerns. \\nThis reviewer thinks that this work at least contains valuable content. \\nHowever, this reviewer is unsatisfactory to some others of the responses and feels a bit hand-wavy (some of them are specified below). 
\\n\\n- > We may consider adding more baselines before the final revision, but this is time-intensive and not our priority. The paper's goal is not to compare system-level SOTAs. Instead, we aim to demonstrate improvements over 2-stage and I/O sync using the same base diffusion model and reconstruction method,\\n\\n Disagree. [C1] is a very closely related method about a way to impose multi-view consistency, which is exactly the same focus of this submission. Thus, mentioning that it is not the authors' priority is irresponsible. \\n\\n- > DreamMat, Paint-it, Paint3D focuses on PBR texture and lighting disentanglement, which is on a different track to our 3D-Adapter.\\n\\n Disagree. Since the key contribution of this work lies in proposing a method to enforce multi-view consistency, it is crucial to evaluate the method\\u2019s performance beyond the diffuse material regime, particularly for texture generation tasks. Limiting the scope to diffuse materials seems restrictive given current advancements, which already demonstrate that PBR or BRDF photometric properties can be obtained effortlessly from pre-trained image diffusion models. 
Thus, as long as the authors want to truely evaluate the multi-view consistency effect, non-diffusion material cases also should have been included for comparison.\\n\\n\\n\\nThis reviewer also understands that, given the limited rebuttal time, it might be hard to reflect these comparisons.\\nOn the other hand, these missing comparisons could be considered critical or not, depending on the reviewers (as Reviewer `TS5P` pointed out).\\nI'd like to hear more opinions from the other reviewers.\"}", "{\"comment\": \"Comments on the questions:\\n\\n> **What is the rationale behind the significant drop in metrics in Table 1 when adding IO sync and GRM finetuning to the two-stage baseline?**\\n\\nAfter adding I/O sync (A1), the problem is that the original GRM is incompatible with coarse intermediate outputs, and will lead to severe error accumulation. After finetuning the GRM (A2), the appearance metrics becomes worse but the geometry metric MDD improves significantly. The reason is explained in Appendix A2 (edited):\\n\\n> While I/O sync works reasonably on our texture generation benchmark, our text-to-3D model using I/O sync (A2 in Table 1 and Fig. 3) exhibits significant quality degradation due to mode collapse. We believe the main reasons are twofold. First, the base model Instant3D generates a very sparse set of only four views, which are hard to synchronize. Second, our fine-tuned GRM reconstructor is trained using the depth loss to suppress surface fuzziness, which has a negative impact when its sharp renderings $\\\\tilde{\\\\mathbf{x}}_t$ are used as diffusion output. This is because a well-trained diffusion model should actually predict blurry outputs $\\\\hat{\\\\mathbf{x}}_t$ in the early denoising stage as the mean of the distribution $p(\\\\mathbf{x}_0|\\\\mathbf{x}_t)$. Only in the late stage should $\\\\hat{\\\\mathbf{x}}_t$ be sharp and crisp, as shown in Fig. 
8.\\n\\nIn light of this, in the revised PDF, we added a stronger I/O sync baseline (A3) with the dynamic blending technique.\\n\\n> As shown in Table 1 and Fig. 3, dynamic I/O sync demonstrates significant improvements in visual quality over vanilla I/O sync. Its MDD score indicates that its geometry consistency lies between vanilla I/O sync and 3D-Adapter. However, the visual quality of dynamic I/O sync is still clearly below that of the two-stage method and 3D-Adapter. While it is possible to tune a better blending weight $\\\\lambda_t^\\\\text{sync}$ , we believe it is very difficult to reduce the gap due to the aforementioned challenges brought by our model setup.\", \"edited\": \"LGM and GRM are already the latest ECCV 2024 SOTAs. In the revised PDF, we also added a new comparison (3DTopia). In general, text-to-3D is a less crowded field compared to image-to-3D, thus more SOTAs are found in the image-to-3D benchmark (Table 3).\\n\\n> **How does the output quality compare to I/O sync + tiled ControlNet, and how critical is ControlNet's role during denoising relative to the 3D prior that could be introduced simply by depth conditioning in IO mode?**\\n\\nWe added an experiment in Table 5 combining 3D-Adapter with I/O sync for texture generation. As expected, this combination performs worse than 3D-Adapter alone due to error accumulation and blur introduced by I/O sync (shown in Figure 5 and indicated by the drop in Aesthetic score). We tested this combination in the text-to-texture setup because I/O sync performs well here, whereas it significantly degrades text-to-3D results, making such a combination impractical. Again, this highlights the importance of testing 3D-Adapter across different setups, even if it adds complexity.\"}", "{\"comment\": \"Thank you for the comments! We have uploaded a revised PDF, which hopefully addresses some of the clarity issues. 
The following are comments in response to the weaknesses.\\n\\n> **By integrating depth conditioning via ControlNet...**\", \"correction\": \"The ControlNet is not trained from scratch but fine-tuned from the base model encoder, following standard ControlNet training. This preserves most of the model's knowledge and doesn't require extensive training. Additionally, for Instant3D and Zero123++, pretrained ControlNets are unavailable. We attempted to use existing Stable Diffusion ControlNets, but they performed poorly.\"}", "{\"comment\": \"I'd like to thank the authors for their clarifications. After considering the issues raised by other reviewers and authors' responses, I am upgrading my ratings for \\\"contribution\\\" (to \\\"fair\\\") and my overall assessment to 5.\\nHowever, based on the results, I still find the contribution to be of limited significance, and have doubts about whether this setup will be built upon in the research community. To reflect the uncertainty in this judgment, I am lowering the confidence of my review to 4.\"}", "{\"metareview\": \"The paper presents a method to incorporate depth prior into the framework of 3D generation. The paper receives mixed ratings from the reviewers. The reviewers have some concerns in terms of several perspectives. First, the reviewer argues that the novelty of the presented work is not elaborated clearly. The geometric consistency issue seems to be the key contribution that the paper targets, however, it is a bit unexpected that the depth prior is involved for 3D generation, and also, this step with diffusion will bring additional overhead as mentioned by one of the reviewers. Another issue criticized by one reviewer is that the proposed geometry consistency regularization seems to not improve the performance clearly, and also some visualizations raised potential concerns about implementation correctness. Finally, quite a few comments regarding the presentation quality were raised by the reviewers. 
Based on these critical comments, AC decided to reject this paper for this time.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers asked for clarification of key contributions and some important details and requested more experiments to show the effectiveness. The reviewers are not fully satisfied with the rebuttal of the authors.\"}", "{\"summary\": \"This submission introduces 3D-Adapter, a novel module to address challenges in 3D geometry consistency synchronization in multi-view image diffusion models.\\nThis submission identifies the fundamental limitation of existing synchronization methods working on input and output domains. Then, the authors propose to add a synchronization mechanism to the intermediate feature level (like ControlNet) to encourage 3D consistency in existing multi-view image diffusion models, instead of input and output levels.\\nThe key idea of the mechanism is that, at each denoising step, intermediate latent representations are decoded, 3D-reconstructed, and projected back to 2D representations (RGBD), where improved 3D consistency is re-encoded, called 3D feedback augmentation.\", \"the_study_explores_two_variants_of_the_3d_adapter\": \"a feed-forward version using a Gaussian Splatting-based model (GRM) with fine-tuning and a training-free version utilizing neural fields and meshes depending on application scenarios. The extensive experiments across text-to-3D, image-to-3D, text-to-texture, and text-to-avatar tasks demonstrate that 3D-Adapter improves generation quality in existing pre-trained models like Instant3D and Zero123++.\\n\\n\\nOverall, the submission is deemed to be well-prepared and positioned.\\n\\nThe proposed method itself would not be very innovative, but this reviewer found that the motivation and its theory behind the motivation are interesting. 
Although the researchers have been empirically aware of the limitation of the I/O sync method, the authors clearly presented the gap, which grounds the motivation of the proposed approach well.\\n\\nHowever, some weaknesses remain that may improve the submission further.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"Clear contributions and positioning among existing research\", \"Clear demonstration of the limitation of I/O sync\", \"Resource-efficient design (training data efficient and training resource-efficient)\", \"Demonstration of various applications\", \"Noticeable improvements in terms of geometry and visual fidelity\", \"Clarity and presentation. The paper is well-organized and clearly presented\"], \"weaknesses\": \"- **Lack of information on computation and memory overhead for competing methods**\\n\\n Although the authors briefly address computaitonal overhead in \\\"Text-to-Texture Generation\\\" and \\\"Text-to-Avatar Generation\\\" applications, the authors do not explicitly compare overheads in \\\"Text-to-3D\\\" and \\\"Image-to-3D\\\" cases. While it is not neccessary to win every competing method, providing these comparisons would help readers better understand the position of this work in terms of computation demands.\\n\\n- **Missing related work**\\n\\n The paper is well-position among the existing work, but some recent works are missing. Since this field is very competitive and timely, it would be beneficial to acknowledge recent developments as well so that readers gains a clearer understanding of the current landscape and to discern similarities and differences. \\n\\n [C1] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion, CVPR 2024\\n\\n [C2] LRM: Large Reconstruction Model for Single Image to 3D, ICLR 2024\\n\\n- **Lack of comparison with recent methods in Sec. 
5.4 \\\"Text-to-Texture Generation\\\"**\\n\\n Texture generation researches have been developed to enhance multi-view consistency quite well. A few samples of the following works are not concurrent but the past researches. The authors missed these very relevant work and only compared with past researches. \\n\\n [C3] DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models, SIGGRAPH 2024\\n\\n [C4] TexPainter: Generative Mesh Texturing with Multi-view Consistency, SIGGRAPH 2024\\n\\n [C5] Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering, CVPR2024\\n\\n [C6] Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models, CVPR2024\\n\\n In addition, for the Text-to-3D generation, [C1] released their code, but the authors did not compare with this closely related work.\\n\\n\\n- **Confusing notation of the proposed method in the tables**\\n\\n 3D-Adaptor is not a standalone model. The notation \\\"3D-Adaptor\\\" could mislead readers to perceive it as a standalone model. The authors are recommended to change the notations of the proposed method across all the parts: e.g., \\\"3D-Adaptor (ours)\\\" => \\\"Instant3D + 3D-Adaptor (ours)\\\" to clarify the integration.\", \"questions\": [\"The analysis in Appendix A helps to clarify the motivation of this work. On the other hand, the result is something that has been known in the community. Have any similar results been discussed previously in different domains? It would be convincing to add references to discuss consistency across different works in different applications.\", \"How much training data should be used to train GRM? As stated in the training details of Sec. 4.1, the authors fine-tuned 2000 iterations with 16 objects in a single batch, i.e., 32000 objects. Considering that a single object yields many views, it seems to be a lot for fine-tuning. 
What is the minimal training data to make the proposed method work? This question can also be rephrased as why the authors picked 2000 and 4000 iterations for the Instant3D and Zero123++ cases, respectively.\", \"Extending analysis: The authors showed why I/O sync may be bad, which is interesting. Then, another question naturally arises: why ControlNet-like feature addition (feedback augmentation) used in this work is effective? It is more interesting because the proposed method also does not guarantee a reduction of the gap in Eq. (7).\", \"Lines 286-288: The authors mentioned two ControlNets for superresolution and depth. Does it mean the authors use both ControlNet or is the depth ControlNet replaced with the super-resolution ControlNet? Then, how can we feed depth rendering to the superresolution ControlNet encoder without training?\", \"Line 287: Please provide the URL for the superresolution ControlNet as a footnote to give proper credit and to enhance the reproducibility.\", \"Line 485 - \\\"texture field optimization\\\": What is texture field optimization referred to? Is the texture field optimization applicable to the other competing methods, TEXTure, Text2Tex, and SyncMVD, in Table 6? Was it applied to them already?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"There is no explicit ethical concern regarding the method itself.\\nHowever, there is a shared potential ethics concern around responsible use as with any generative model producing realistic 3D content.\\nThe method could be misused in unconsented content reproduction and violating copyrights. \\n\\nThe authors are encouraged to include a paragraph of Ethics Statement (at the end of the main text before references) to address potential concerns as instructed in the ICLR author guide. 
A discussion of the potential implications could enhance the paper\u2019s contribution to responsible AI practices.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the timely response!\\n\\nRegarding the clarity issues, we will continue to revise the paper based on the comments from other reviewers. We would appreciate it if you could provide more detailed suggestions on writing improvement.\\n\\nThe claims about the I/O sync methods are analyzed in depth in the Appendix (also revised and updated). Please let us know if you find any part of it unclear.\", \"edit\": \"We further revised the text to clarify the error accumulation issue, and added a reference to the paper *On Error Propagation of Diffusion Models*:\\n\\n> Diffusion model sampling is sensitive to error accumulations (Li & van der Schaar, 2024). I/O sync methods insert 3D reconstruction and rendering operations into the denoiser in a way that disrupts the original model topology and introduces errors during each denoising step (unless reconstruction and rendering are perfect).\\n\\nThe ControlNet's effectiveness can be directly reflected by comparing B0 and C0 in Table 1 (both use the same finetuned GRM). For texture and avatar generation, disabling ControlNet feedback is equivalent to the two-stage baseline and we have shown the differences. Reviewer kMh5 has asked about the comparison between ControlNet and latent fusion, and we have clarified that latent fusion is equivalent to our newly added dynamic I/O sync baseline. Please let us know if you have questions about these validations.\"}", "{\"comment\": \"**Update on EpiDiff:**\\n\\nWe managed to test EpiDiff, using GRM to reconstruct a 3DGS from their generated views for a fair comparison. 
The quantitative results are added in the revised paper (which are weaker than many other methods we have tested).\\n\\n**Analysis of EpiDiff:**\\n\\nDespite its use of epipolar attention and nearby view aggregation, EpiDiff-generated views exhibit significant flickering, leading to poor 3D consistency. Below are visualizations of the sampled views:\\n* https://ibb.co/NWSyHZj\\n* https://ibb.co/cL82YWB\\n* https://ibb.co/L6hmkqn\\n* https://ibb.co/r2fswnN\\n\\nThis inconsistency causes GRM to produce severe floaters, resulting in poor rendered images:\\n* https://ibb.co/zSy5QrT\\n* https://ibb.co/LJt6XDF\\n\\nNotably, the EpiDiff paper evaluates only novel view generations rather than rendered views from an actual 3D representation, which we believe is insufficient to demonstrate 3D consistency. In contrast, 3D-Adapter explicitly employs 3D representations and renderings as strong constraints for 3D consistency. This is why we think our approach differs fundamentally from EpiDiff.\"}", "{\"summary\": \"The authors propose 3D-adapter, a method to improve quality and 3d-consistency of existing text-to-multiview and text-to-image models by having a branch which reconstructs an object in 3D, then applies trained controlnet on rendered rgb and depth.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Improved quality in downstream tasks over previous state-of-the-art\", \"Extensive final metrics in text-to-3d evaluation\"], \"weaknesses\": [\"The quality improvement in the main text-to-3D task is 1) marginal and 2) visuials are not convincing enough to justify complicating the training pipeline. For instance, in Fig. 1, image-to-3D visuals are comparable b/w 3D-Adapter and \\\"two-stage pipeline\\\"\", \"The paper lacks significant novelty, as the use of 3D representations for synchronization has been explored before (e.g. NerfDiff). 
The main contribution appears to be the placement of the adapter in the parallel branch rather than at the input/output stage of the diffusion UNet. However, this seems more like a technical choice, as is the decision to train ControlNet on intermediate outputs.\", \"Some visuals (e.g. Fig. 3, column 5) raise concerns about the correctness of the implementation.\"], \"questions\": \"1. Please clarify the following:\\n > However, 3D reconstruction and rendering are lossy operations that disrupt residual connections.\\n\\n Since residual connections shouldn\\u2019t be disrupted if 3D reconstruction is applied before or after the UNet denoising stage, I\\u2019m unclear why this is described as a problem. Could you explain?\\n\\n2. (line 142)\\n\\n> A common issue with two-stage approaches is that existing reconstruction methods, often designed for or trained under conditions of perfect consistency, lack robustness to local geometric inconsistencies. This may result in floaters and **blurry textures**.\\n\\nPlease provide evidence that two-stage approaches specifically suffer from blurry textures.\\n\\n3. (line 198)\\n\\n> assuming linearity\\n\\nCould you clarify what you mean by \\\"linearity\\\" in this context?\\n\\n4. Regarding concerns over some visuals: Could you provide a detailed explanation of the I/O-sync implementation in Tables 1 and 5? Table 5 shows a slight improvement when using I/O-sync in the \\\"baseline\\\" model, but in Table 1, there's a significant drop from A0 to A1. The visuals in column 5 appear broken and clearly worse than those in, for example, LGM, yet the CLIP score is significantly higher. Please double-check these evaluations.\\n\\n5. Could you justify the use of ControlNet? How does it compare to using pure latent fusion?\\n\\n6. \\n> During the sampling process, the adapter performs NeRF optimization for the first 60% of the denoising steps. 
It then converts the color and density fields into a texture field and DMTet mesh, respectively, to complete the remaining 40% denoising steps. All optimizations are incremental, meaning the 3D state from the previous denoising step is retained to initialize the next. As a result, only 96 optimization steps are needed per denoising step. Alternatively, for texture generation only, multiview aggregation can be achieved by backprojecting the views into UV space and blending the results according to visibility\\n\\nIt's quite complex and specific training process, it would be great to hear about the reasoning/ablations behind these numbers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the reply. The comments you read are outdated and have been updated. Please check the latest version on openreview.\", \"regarding_epidiff\": \"> EpiDiff is an image-to-3D model. We have tried the official code, but our 48GB A6000 GPU failed to run the inference, as the author suggested using an 80GB GPU. The core contribution of EpiDiff is improving the network architecture using epipolar attention, while our work focus on leveraging explicit 3D reconstruction and rendering, which is on a different track to EpiDiff.\\n\\nWe have added CRM and InstantMesh as suggested by Reviewer TS5P. Note that the best methods we have compared with (One2345++, InstantMesh, GRM) all use Zero123++ as the multi-view model, stressing the importance of the base model. EpiDiff adopts a entirely new base model, which introduces lots of uncertainties. For text-to-3D, we will further add 3DTopia for comparison.\\n\\nPBR texture generation is a non-trivial task. While it is entirely feasible to use pretrained diffusion models to generate PBR materials, this requires careful handling of PBR differentiable renderers and lighting. 
Therefore, adapting our own method to generate PBR texture is beyond the scope of our work (which already has a very large scope). **Comparing our methods with PBR texture methods isn't really meaningful because our method (and the existing baselines we have compared) renders the lightingless albedo while the PBR methods need to render their results under specific lightings, so the condition isn't the same.**\\n\\n> Thus, mentioning that it is not the authors' priority is irresponsible. \\n\\nWe are being very responsible not to mislead readers with unfair comparisons. Even if we finally get the comparison done, the result won't indicate which one of EpiDiff and 3D-Adapter is superior, because the base model we use are very different.\"}", "{\"summary\": \"The paper introduces a novel plug-in module, 3D-Adapter, which enhances multi-view diffusion models to improve the quality of 3D geometry generation. By integrating 3D feedback augmentation, it infuses 3D geometry awareness into pretrained image diffusion models. The 3D-Adapter operates in two main variants: a fast feed-forward version and a flexible training-free version using neural fields and meshes. The proposed method addresses the limitations of previous two-stage approaches by maintaining the original network topology and augmenting the base model through feature addition. Extensive experiments show improvements across various tasks, including text-to-3D, image-to-3D, text-to-texture, and text-to-avatar generation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The 3D feedback augmentation approach and the architecture's integration with diffusion models present a novel contribution that extends beyond typical 2D-to-3D adaptation techniques.\\n2. The figures effectively demonstrate the qualitative improvements of the proposed approach, particularly in challenging cases.\\n3. 
The broad applicability of 3D-Adapter to text-to-3D, image-to-3D, and text-to-texture tasks indicates potential for further research and practical applications.\", \"weaknesses\": \"1. **Claims About Existing Methods**: The assertion (L046-L053) that I/O sync methods degrade residual connections and cause texture quality issues is confusing and not adequately supported. More specific explanations with these methods are necessary.\\n2. **Inadequate Definitions**: The paper lacks clarity on the a) components of the 3D-Adapter (I understand that it includes VAE Decoder, ControlNet, and 3D Reconstruction Model) and b) the \\u201cI/O sync\\u201d baselines compared in Fig.1. Explicitly listing the modules included and the compared methods or baseline settings would aid understanding.\\n3. **Comparison Gaps**: The image-to-3D generation experiments do not include comparisons with CRM[1], SV3D[2], InstantMesh[3]. The text-to-texture experiments do not include metrics like FID, KID or comparisons with some established methods such as Paint3D[4] and FlashTex[5].\\n4. **Limitation Discussion**: Although the paper mentions the concerns about inference efficiency and shows the metric in the Appendix, more discussions about training efficiency would strengthen the paper, because it requires fine-tuning reconstruction model while keeping the multi-view diffusion model in memory.\\n5. **Insufficient References and Discussion of Prior Work**: a) The paper lacks citations for relevant multi-view generation methods such as Free3D[6], EpiDiff[7], and SPAD[8]. b) Additionally, it does not discuss highly relevant works like IM3D[9] and Carve3D[10]. 
c) While Ouroboros3D[11] and Cycle3D[12] may be considered concurrent work, it would still be valuable to include a discussion, as there are differences between these methods and the 3D-Adapter that merit discussion.\\n\\n[1] CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model\\n\\n[2] SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion\\n\\n[3] InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models\\n\\n[4] Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models\\n\\n[5] FlashTex: Fast Relightable Mesh Texturing with LightControlNet\\n\\n[6] Free3D: Consistent Novel View Synthesis without 3D Representation\\n\\n[7] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion\\n\\n[8] SPAD : Spatially Aware Multiview Diffusers\\n\\n[9] IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation\\n\\n[10] Carve3D: Improving Multi-view Reconstruction Consistency for Diffusion Models with RL Finetuning\\n\\n[11] Ouroboros3D: Image-to-3D Generation via 3D-aware Recursive Diffusion\\n\\n[12] Cycle3D: High-quality and Consistent Image-to-3D Generation via Generation-Reconstruction Cycle\", \"questions\": \"1. Could the authors elaborate on how 3D-Adapter's 3D feedback augmentation differs fundamentally from existing I/O sync techniques? Specifically, in the claim made around L051, the paper categorizes methods such as Nerfdiff, DMV3D, and VideoMV under synchronizing the denoised outputs, suggesting that these approaches disrupt residual connections and result in poor texture quality. This assertion is somewhat confusing and not evidently supported. For example, DMV3D uses a 3D reconstruction model as a multi-view denoiser, and VideoMV employs a stage-wise re-sampling strategy to fuse 3D reconstruction information. 
It is unclear how these methods would interfere with the residual connections in the network design. The claim requires further justification or supporting evidence to be convincing.\\n2. Additionally, it is recommended that the authors consider showing intermediate results during the denoising process, such as outputs from the 3D reconstruction model at various stages. This could help readers better understand the contributions of the proposed method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Update: Epidiff also does not provide a full 3D generation solution in their official codebase (https://github.com/huanngzh/EpiDiff). Only the multiview diffusion model is provided, which is insufficient to reproduce the 3D generation results. All the methods we have compared with are complete 3D generation solutions (with NeRF, GS, or Mesh representations), and the final rendered images are evaluated. Again, this shows that the focus of EpiDiff is to provide a better multi-view model, instead of building a complete 3D generation pipeline.\"}", "{\"summary\": \"The paper introduces 3D-Adapter, a plug-in module aimed at improving 3D consistency in multi-view diffusion models for 3D generation. By integrating a combination of 3D representation and ControlNet for depth conditioning into the denoising framework, 3D-Adapter enhances 3D structure coherence across views without modifying the core model's topology / weights. The authors present two 3D-Adapter variants: a fast feed-forward method using Gaussian splatting (GRM) and a flexible, training-free approach utilizing NeRF optimization. 
Extensive evaluations across text-to-3D, image-to-3D, and text-to-texture tasks demonstrate that 3D-Adapter improves geometry quality and coherence over prior methods, providing a robust solution for multi-view 3D generation tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"By integrating depth conditioning via ControlNet, the 3D-Adapter provides a straightforward yet effective solution to incorporate 3D priors within 2D diffusion frameworks as a part of denoising process.\\n\\nThe paper presents extensive quantitative and qualitative evaluations across multiple configurations and baselines.\\n\\nThe proposed 3D-Adapter outperforms several prior methods in 3D generation quality, demonstrating improvements on widely used metrics like CLIP score and aesthetic score.\", \"weaknesses\": \"The paper\\u2019s scope is broad, which obscures the clarity of its main contributions and makes it challenging to pinpoint specific innovations with reusable community value. Instead of covering multiple branches, e.g., GRM vs. NeRF optimization, a more focused examination on a single GRM pipeline could provide deeper insights, making the contribution more accessible and actionable for the community. Some comparisons and insights appear irrelevant, e.g. the text-to-avatar section, as the paper primarily addresses general 3D consistency issues rather than avatar-specific issues. This creates confusion regarding the paper\\u2019s core contributions.\\n\\nIntroducing per-step 3D \\u2192 depth \\u2192 ControlNet prediction comes with computational overhead. The paper could explore strategies to mitigate this, such as reducing the frequency of applying the adapter or skipping initial steps. Since per-step VAE decoding is probably the main bottleneck, adopting a lightweight VAE alternative, e.g. TinyAE, could yield substantial speed gains.\\n\\nSome design choices around the ControlNet addition appear understudied. 
Introducing depth ControlNet itself may add a substantial 3D prior, but this effect is not adequately considered and is instead attributed mainly to ControlNet's role in denoising. Additionally, in the GRM branch, ControlNet is trained from scratch, which seems counterintuitive and is not supported with ablation studies.\\n\\nThe paper was fairly difficult to read and navigate, primarily due to its broad scope and, to a lesser extent, its writing style.\", \"questions\": \"(1) What is the rationale behind the significant drop in metrics in Table 1 when adding IO sync and GRM finetuning to the two-stage baseline? Also GRM finetuning does not appear to be evaluated in the 3D-Adapter case.\\n\\n(2) The SOTA selection in Table 2 seems debatable; could the authors clarify the choice of methods over more recent approaches?\\n\\n(3) How does the output quality compare to I/O sync + tiled ControlNet, and how critical is ControlNet\\u2019s role during denoising relative to the 3D prior that could be introduced simply by depth conditioning in IO mode?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for reconsidering your assessment! While we acknowledge the uncertainty around the adoption of new methodologies, we believe our work provides a strong foundation for further exploration:\", \"**Versatility**: Our experiments span text-to-3D, image-to-3D, text-to-texture, and text-to-avatar setups, showcasing broad applicability. Other applications, such as panorama generation and 4D generation, could also benefit from improved synchronization methods like 3D-Adapter.\", \"**Ease of Integration**: 3D-Adapter maintains the base model's topology, making it plug-and-play if existing ControlNets and reconstructors are available. 
For example, we demonstrated its use with pretrained ControlNet and customized Stable Diffusion for SOTA texture generation\\u2014no training required, ensuring accessibility for the community.\", \"**Significance of Results**: Our experiments clearly demonstrate improvements across tasks, both visually and quantitatively, reinforcing the practical impact of our approach. Please note that 3D generation is a very competitive field, and our improvements in metrics are not trivial.\"]}", "{\"comment\": \"Thank you for the constructive comments! We have uploaded a revised PDF, which hopefully addresses some of the clarity issues. The following are comments in response to the weaknesses and questions.\\n\\n> **1. Clarity of exposition: A more comprehensive explanation of the relationship between GRM and models such as Instant3D and Zero123++ would greatly enhance the reader's understanding. It is particularly important to elucidate whether GRM builds 3D representations from the views generated by Instant3D and Zero123++ and whether fine-tuning of GRM is involved in this process.**\\n\\nTo clarify, Instant3D or Zero123++ serves as the base model, while GRM acts as the 3D reconstructor. This relationship is outlined in Section 4 and illustrated in Figure 2(c). GRM constructs 3D representations from the base model's intermediate denoised views during both fine-tuning and inference.\\n\\n> **According to Equation 2, it appears that the 3D-Adapter could potentially be applied to other I/O sync-based pretrained models. However, it is necessary to discuss whether incorporating Control Net into I/O sync pretrained models beyond GRM would yield similar results.**\\n\\nIn Table 5, we tested 3D-Adapter combined with I/O sync for texture generation (which is not based on GRM but texture backprojection). As expected, this combination underperforms compared to 3D-Adapter alone due to error accumulation and blur introduced by I/O sync (see Figure 5 and the drop in Aesthetic score). 
\\n\\n> **\\u2026 and why Training Phase 1 is essential. If Training Phase 1 is a crucial step for optimizing the performance of 3D-Adapter, including GRM, it should be examined whether this phase is equally necessary when applied to other models.**\", \"edited\": \"Thank you for pointing this out. We have added visual comparisons in the revised Appendix (Fig. 10).\\n\\n> **Please include a dedicated pipeline figure specifically for the process described in Section 4.2 (line 289) to enhance clarity.**\\n\\nThank you for the suggestion. We will add it in our next revision.\\n\\n> **What are the results of using the optimization-based 3D-Adapter without additional conditioning.**\\n\\nThank you for the good question. This will lead to the Janus multi-face problem, since 3D-Adapter alone does not guarantee high-level global semantic consistency.\"}", "{\"comment\": \"Comments on the questions (part 1):\\n\\n> **Since residual connections shouldn't be disrupted if 3D reconstruction is applied before or after the UN et denoising stage, I'm unclear why this is described as a problem. Could you explain?**\\n\\n(Edited) We acknowledge that our use of \\u201cresidual connections\\u201d may be confusing, so we have revised the text for better clarity:\\n\\n> Diffusion model sampling is sensitive to error accumulations (Li & van der Schaar, 2024). I/O\\nsync methods insert 3D reconstruction and rendering operations into the denoiser in a way that\\ndisrupts the original model topology and introduces errors during each denoising step (unless\\nreconstruction and rendering are perfect).\\n\\n> **Please provide evidence that two-stage approaches specifically suffer from blurry textures.**\\n\\nThank you for pointing this out. This is our writing issue. It should be \\\"texture seams\\\" rather than \\\"blurry textures\\\". 
We have revised the text:\\n\\n> A common issue with two-stage approaches is that existing reconstruction methods, often designed for or trained under conditions of perfect consistency, lack robustness to local geometric inconsistencies. This may result in floaters and **texture seams**.\\n\\nTexture seams are very noticeable in the text-to-texture and text-to-avatar results.\\n\\n> **Could you clarify what you mean by \\\"linearity\\\" in this context?**\", \"we_have_added_a_brief_explanation_in_our_revised_appendix\": \"> When performing diffusion ODE sampling using the common Euler solver, a linear input sync operation **(e.g., linear blending or optimizing using the L2 loss)** is equivalent to...\\n\\nTo clarify, this means that the synchronized output is a linear combination of pre-synchronization views. This assumption is made to simplify theoretical analysis. While some synchronization methods (e.g., our GRM-based I/O sync) are not strictly linear, the mode collapse issue from score averaging still persists.\\n\\n> **Regarding concerns over some visuals: Could you provide a detailed explanation of the I/O-sync implementation in Tables 1 and 5? Table 5 shows a slight improvement when using I/O-sync in the \\\"baseline\\\" model, but in Table 1, there's a significant drop from A0 to A1.**\\n\\nTable 5 focuses on texture generation. I/O sync is particularly effective for this task, since the depth ControlNet already provides strong conditioning, making view synchronization relatively easy. Notably, TexPainter and SyncMVD (shown in Table 6) are typical I/O sync methods for texture generation, though implemented differently from ours (still, our own implementation achieves better quality and efficiency).\\n\\nIn Table 1, A1 represents the I/O sync baseline using the original GRM without finetuning. 
The unfinetuned GRM is incompatible with coarse intermediate outputs, and will lead to severe error accumulation and poor geometry with floaters (reflected in the high MDD metric). After finetuning the GRM (A2), the MDD metric improves substantially, even outperforming the 3D-Adapter in geometry. The drop in appearance metrics following finetuning is explained in the additional analysis provided in Appendix A2 (edited):\\n\\n> While I/O sync works reasonably on our texture generation benchmark, our text-to-3D model using I/O sync (A2 in Table 1 and Fig. 3) exhibits significant quality degradation due to mode collapse. We believe the main reasons are twofold. First, the base model Instant3D generates a very sparse set of only four views, which are hard to synchronize. Second, our fine-tuned GRM reconstructor is trained using the depth loss to suppress surface fuzziness, which has a negative impact when its sharp renderings $\\\\tilde{\\\\mathbf{x}}_t$ are used as diffusion output. This is because a well-trained diffusion model should actually predict blurry outputs $\\\\hat{\\\\mathbf{x}}_t$ in the early denoising stage as the mean of the distribution $p(\\\\mathbf{x}_0|\\\\mathbf{x}_t)$. Only in the late stage should $\\\\hat{\\\\mathbf{x}}_t$ be sharp and crisp, as shown in Fig. 8.\\n\\nIn light of this, in the revised PDF, we added a stronger I/O sync baseline (A3) with the dynamic blending technique.\", \"edit\": \"Please also note that I/O sync itself is not typically the first choice for 3D generation tasks. For example, in text-to-3D and image-to-3D tasks, most state-of-the-art methods (e.g., GRM, CRM, InstantMesh) employ two-stage approaches. I/O sync is more commonly used in texture generation tasks (e.g., SyncMVD, TexPainter), where our own I/O sync implementation outperforms others. 
Here, its drawbacks are thoroughly analyzed both theoretically in Appendix A.1 (where the linearity assumption holds due to linear texture blending) and empirically, as demonstrated in the experiments in Table 5.\"}", "{\"comment\": \"For the EpiDiff, well.. this reviewer strongly disagrees about `while our work focus on leveraging explicit 3D reconstruction and rendering, which is on a **different track** to EpiDiff.`\\nThe level and way in which multi-view consistency is introduced are comparable, where both methods propose modules conditioning multi-view consistency. Thus, closely related work. \\n\\nFor the PBR texture, I know the limitations of the proposed method the authors just mentioned (the proposed method cannot deal with view-dependent photometric characteristics due to the requirement of 3D reconstruction from multi-view images), and that's the point I pointed out in the initial review. \\nThat is something that needs to be discussed as a limitation, not something that can be rebutted.\\n\\nConsidering missing important references related to multi-view consistency, texture generation applications, and theory, the way the reviewer approaches could appear to be faithless. (While I didn't mention it intentionally, the theory of Appendix A is straightly deducible from the reference [Bradley & Nakkiran]. In this case, the authors should have cited that concurrent work at the initial submission.)\"}", "{\"comment\": \"Thank you for the constructive comments. We have uploaded a revised PDF and the following are comments in response to the weaknesses.\\n\\n> **Claims About Existing Methods: The assertion (L046-L053) that I/O sync methods degrade residual connections and cause texture quality issues is confusing and not adequately supported. 
More specific explanations with these methods are necessary.**\\n\\n(Edited) We acknowledge that our use of \\u201cresidual connections\\u201d may be confusing, so we have revised the text for better clarity:\\n\\n> Diffusion model sampling is sensitive to error accumulations (Li & van der Schaar, 2024). I/O sync methods insert 3D reconstruction and rendering operations into the denoiser in a way that disrupts the original model topology and introduces errors during each denoising step (unless reconstruction and rendering are perfect).\\n\\n> **Inadequate Definitions: The paper lacks clarity on the a) components of the 3D-Adapter (I understand that it includes VAE Decoder, ControlNet, and 3D Reconstruction Model) and b) the \\\"I/O sync\\\" baselines compared in Fig.1. Explicitly listing the modules included and the compared methods or baseline settings would aid understanding.**\", \"edit\": \"Your understanding is correct. In the revised PDF, we updated Figure 2 to include VAE encoders and decoders.\\n\\n> **Comparison Gaps: The image-to-3D generation experiments do not include comparisons with CRM[1], SV3D[2], InstantMesh[3].**\\n\\nThank you for the suggestions. We have added CRM and InstantMesh to the comparisons. Notably, the strongest methods (3D-Adapter, GRM, InstantMesh, One2345++) all use Zero123++ as their base model. To the best of our knowledge, SV3D is a comparatively weaker base model.\\n\\n> **The text-to-texture experiments do not include metrics like FID, KID**\\n\\nComputing FID and KID metrics requires a reference dataset, but none of the text-to-texture models we evaluated are trained on domain-specific data, as they adapt single-view image diffusion models for texture generation. Therefore, a standardized reference dataset does not exist. 
Using Objaverse for this purpose is unsuitable because (a) most Objaverse objects have poor textures, and (b) the evaluated methods do not disentangle texture from lighting, requiring non-standardized lighting choices for reference data creation.\\n\\n> **Comparisons with some established methods such as Paint3D[4] and FlashTex[5].**\", \"edited\": \"Thank you for providing the suggested references. All the references have been added.\", \"remarks_on_some_of_the_related_work\": [\"Ouroboros3D is a concurrent work and the high-level design is very close to our 3D-Adapter. The key difference is that Ouroboros3D feeds the rendering to the next denoising timestep, while ours operates within the current timestep. In fact, we have tested the other design as an early iteration of 3D-Adapter, and the results were slightly weaker on the text-to-texture benchmark (CLIP=26.12, Aesthetic=4.83, vs our CLIP=26.40, Aesthetic=4.85).\", \"Free3D is a novel view model, its role is the same as Instant3D or Zero123++. EpiDiff, SPAD also focus on improving novel view models through epipolar attention. These work did not evaluate the rendered views from 3D representations. In contrast, our work focuses on imposing explicit 3D reconstruction and rendering. In response to Reviwer dfQt, we have tested the official code of EpiDiff, and the generated views exhibit poor consistency.\", \"Cycle3D is an I/O sync method for image-to-3D. It also produces slightly blurry appearances on the backside of objects.\", \"IM3D is two-stage generation but with repeated SDEdit-like refinements to the rendered views, which is an orthogonal contribution to our 3D-Adapter.\", \"Paint3D and FlashTex focuses on PBR texture generation, which is beyond the scope of this work (we acknowledge this as a limitation in the revised paper).\"]}" ] }
C0Boqhem9u
LinBridge: A Learnable Framework for Interpreting Nonlinear Neural Encoding Models
[ "Xiaohui Gao", "Yue Cheng", "Peiyang Li", "Yijie Niu", "Yifan Ren", "Yiheng Liu", "Haiyang Sun", "Zhuoyi Li", "Weiwei Xing", "Xintao Hu" ]
Neural encoding of artificial neural networks (ANNs) aligns the computational representations of ANNs with brain responses, providing profound insights into the neural basis underpinning information processing in the human brain. Current neural encoding studies primarily employ linear encoding models for interpretability, despite the prevalence of nonlinear neural responses. This leads to a growing interest in developing nonlinear encoding models that retain interpretability. To address this problem, we propose LinBridge, a learnable and flexible framework based on Jacobian analysis for interpreting nonlinear encoding models. LinBridge posits that the nonlinear mapping between ANN representations and neural responses can be factorized into a linear inherent component that approximates the complex nonlinear relationship, and a mapping bias that captures sample-selective nonlinearity. The Jacobian matrix, which reflects output change rates relative to input, enables the analysis of sample-selective mapping in nonlinear models. LinBridge employs a self-supervised learning strategy to extract both the linear inherent component and nonlinear mapping biases from the Jacobian matrices of the test set, allowing it to adapt effectively to various nonlinear encoding models. We validate the LinBridge framework in the scenario of neural visual encoding, using computational visual representations from CLIP-ViT to predict brain activity recorded via functional magnetic resonance imaging (fMRI). Our experimental results demonstrate that: 1) the linear inherent component extracted by LinBridge accurately reflects the complex mappings of nonlinear neural encoding models; 2) the sample-selective mapping bias elucidates the variability of nonlinearity across different levels of the visual processing hierarchy. 
This study not only introduces a novel tool for interpreting nonlinear neural encoding models but also provides novel evidence regarding the distribution of hierarchical nonlinearity within the visual cortex.
[ "Nonlinear encoding models", "Jacobian matrix", "Linear inherent component", "Mapping bias" ]
Reject
https://openreview.net/pdf?id=C0Boqhem9u
https://openreview.net/forum?id=C0Boqhem9u
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qtI7lRjelM", "ZUWgENX2G8", "Sy9DXZkx33", "GQoX6XjpmO", "CynS8Br1Kz", "3vLpsu7TU7", "2akahtaNcD" ], "note_type": [ "official_review", "decision", "meta_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1730647519965, 1737523584703, 1734850571761, 1731217394392, 1730664441869, 1730699160869, 1730580283039 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3594/Reviewer_FTUv" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3594/Area_Chair_Tign" ], [ "ICLR.cc/2025/Conference/Submission3594/Reviewer_dkMG" ], [ "ICLR.cc/2025/Conference/Submission3594/Reviewer_n9vK" ], [ "ICLR.cc/2025/Conference/Submission3594/Reviewer_9nLx" ], [ "ICLR.cc/2025/Conference/Submission3594/Reviewer_rUdW" ] ], "structured_content_str": [ "{\"summary\": \"The manuscript proposes a method called LinBridge to interpret non-linear encoding models of brain activity (models that predict the output response, given an input stimulus), by distinguishing the global Jacobian matrix (the linear part of the model) and local deviations from it around each single input presented in an experiment (the \\u2018biases\\u2019 that characterize the nonlinearity). Given a pre-trained encoding model, LinBridge uses contrastive learning on low-dimensional embedding of the Jacobian matrices. The manuscript demonstrates the method by first training a simple nonlinear model (a 2 layer network with sigmoid nonlinearity, and also a control linear model), on a large fMRI dataset of brain responses to images; and then applying LinBridge to the trained model.\\n\\nThe main results are 1) The non-linear model, despite its simplicity, predicts neural activity better than the linear model. 2) In the non-linear model, the linear global component characterized by LinBridge is almost as predictive as the full model. 
3) The local \\u2018biases\\u2019 add more predictive power for higher visual cortex than lower visual cortex, consistent with general knowledge that higher visual cortex computes a more nonlinear representation of the image.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The problem addressed here is important. Learned nonlinear encoding models are necessary to predict brain activity well, but they are difficult to interpret and therefore offer limited understanding of brain computation. The proposed solution is conceptually simple and, to the extent tested in this manuscript, effective. The method combines well established ideas including the Jacobian to characterize local linear behavior, low-dimensional embeddings, and contrastive learning. The pseudo-code in the Appendix also highlights the simplicity of the method. With some exceptions noted below, the manuscript is written clearly.\", \"weaknesses\": \"The nonlinear encoding model is very simple and, not surprisingly, its predictive power is relatively small (how does it compare to other, previously published non-linear models for this dataset?). Although maximizing predictive power is not the goal of this manuscript, if the nonlinearity of the model is too simple, it seems not that surprising that the simple decomposition of LinBridge will work well.\\n\\nIn other words, how much does the nonlinearity really help this particular encoding model? Figure 3 is confusing in this regard: panels a,b look indistinguishable to me (sorry, I am not that familiar with fMRI data). And panel c shows that more voxels are activated for nonlinear, but the average r-square seems even lower for nonlinear than linear (linear histograms are shifted to the right). I suspect I am not reading the plots the right way, some guidance would help the reader. 
\\nThen figure 4 shows that the encoding performance with the full nonlinear model is almost identical to the linear component extracted by LinBridge. Doesn\\u2019t this say that the nonlinear part of the model is not that useful? The analysis of Figure 5 partly addresses that concern, showing that the nonlinear biases are more predictive in higher visual cortex.\", \"clarity\": \"the computation of the Jacobian matrix JM should be explained clearly (not the linear and Delta terms, those are clear enough). It is the central idea of the paper, and the authors also list as a limitation that it is resource intensive, but it is not explained at all. It is conceptually simple, but I think the practical details might be important.\\nI found Figure 1 not informative. If the point is to convey that nonlinear mappings are characterized by the fact that they vary around different inputs, that is not at all conveyed by the image.\", \"questions\": \"See specific questions in weaknesses, in particular referring to Fig. 3-4, and to the details of computing the Jacobian.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper introduces a novel framework for characterizing and interpreting the relationships between artificial neural networks and fMRI imaging data. The reviewers praised the paper for proposing a conceptually simple and effective solution to a timely and important problem. Unfortunately, however, they were not persuaded that it provided a rigorous or thorough enough set of evaluations and comparisons to existing baselines, and raised concerns about the reliance on a pre-specified nonlinear feature extractor and the ability to provide new scientific insights. 
I regret that the paper cannot be accepted to this year's ICLR, but I wish the authors the best of luck in revising it for publication elsewhere.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised a number of concerns, including about the thoroughness of comparisons to existing baselines and the paper's ability to provide new scientific insights. The authors did not write any rebuttals.\"}", "{\"summary\": \"The authors introduce LinBridge, a framework for non-linear mapping between neural network activations and fMRI data (here, the Natural Scenes Dataset), with the goal of enhancing interpretability by factorizing the mapping into (i) a linear component that approximates the nonlinear relationship between source and target, along with (ii) a bias that captures the idiosyncratic aspects of the mapping that are \\\"sample-specific\\\" (given that in a non-linear encoding model, distinct samples may be transformed differently depending on their features). The authors motivate the need for such an approach by pointing out that the brain computes complex non-linear functions of the input, and thus, it is likely the case that non-linear mappings could provide better predictivity than linear mappings.\\n\\nLinBridge works by leveraging Jacobian matrices, which capture the model's sensitivity to input variations. It separates the consistent linear component, JMinherent, from sample-specific nonlinear biases, \\u0394JM, using a CNN to compress the Jacobian matrix and extract meaningful structure. LinBridge then applies contrastive learning with InfoNCE loss to maximize alignment between consistent mappings and minimize nonlinear bias effects. \\n\\nUsing this approach, the authors show that a nonlinear encoding model applied to NSD yields a greater percentage of voxels with significant encoding levels than a linear encoding model with the same 2-layer architecture, but no relu non-linearity. 
LinBridge successfully extracts a stable linear component from nonlinear models, which achieves comparable performance across visual regions. \\n\\nAdditionally, the authors use this framework to assess the degree to which a given voxel responds linearly with respect to the input samples. They do this by measuring how the low-dimensional embedding \\u0394JMdown varies across different input samples. Then for each voxel, a first-degree polynomial is fit to the response values from \\u0394JMdown across an array of ordered samples. The coefficient of the first derivative from this linear fit reflects how much the voxel's response changes with different inputs. If the voxel's responses show minimal variation (i.e., the absolute first derivative - AFD - approaches zero), it indicates a more linear response; higher values imply stronger nonlinearity. Using this AFD metric, the authors present evidence that a greater proportion of visual voxel responses are non-linear along the visual hierarchy, from primary visual cortex to high-level visual cortex where category-selective regions are located.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors take seriously the need to balance having mappings that are maximally predictive, and also interpretable. These two desiderata are naturally in tension, so I applaud that they developed a novel approach for tackling this challenge that yielded some sensible results.\", \"The authors do a solid job citing recent literature from the encoding model/mapping literature, with only a few key omissions. 
The work is certainly timely.\", \"I applaud the authors' use of NSD in this study - it's the data resource most naturally suited to developing new mapping approaches like these, and is substantially higher quality and larger scale than the datasets that were used in the papers by Zhang, Li, and Cui in section 2.2.\", \"The comparison between the 2-layer linear and non-linear models is a nice clear setup for validating their proposed approach.\", \"Using the factorization of the Jacobian to study which voxels' responses are linear vs. nonlinear (using the AFD metric) is the most interesting part of the paper in my opinion. It's a clever application of their framework that could be more broadly applicable. I would like to see the AFD analyses extended in a revised version of this work.\"], \"weaknesses\": \"I had several concerns with the paper's introduction and framing, which I will describe below. Beyond the framing of the paper and the way the authors discuss linear vs. nonlinear mappings, my main critique is that the validation and application of LinBridge did not provide much new scientific insight. I feel that the approach is mostly sound, but the impact of the paper is strongly limited by the fact that the authors do not do enough to show that LinBridge can provide new scientific insight into the nature of brain representations, to help build better theories. Relatedly, it also appears that the overall complexity and computational requirement of the approach is dramatically higher than linear encoding procedures. Based on the authors' results, it remains unclear to me whether (a) the added complexity of the procedure provides enough added predictive power to justify using LinBridge, and (b) whether factorizing the linear and non-linear components of the mapping actually provides the sort of \\\"interpretability\\\" that we seek in the field today. 
Folks who study explainable AI / mech-interp would use this phrase to refer more to understanding the nature of the features that are represented in DNNs and in the brain. The authors did not convince me that LinBridge gets us closer to this goal. For these reasons, I am unable to rate the paper higher than a 3, though I would be willing to revise my score upward with sufficient revision.\", \"comments_on_framing\": [\"The authors missed some important literature on linear vs. nonlinear mappings, such as [Ivanova et al. 2022](https://www.biorxiv.org/content/10.1101/2021.04.02.438248v3). This line of work should be cited. There's also an important recent [review](https://arxiv.org/abs/2310.13018) of mapping methods in neuroAI that the authors should cite. Their approach and findings should be meaningfully situated in the context of these two papers, since these more closely reflect the current state of thinking in neuroAI around this topic.\", \"The logic in Section 2 was a bit clunky. The authors critique linear encoding models by saying \\\"the inherent nonlinear dynamics of neural activity limit the predictive power and interpretability of linear models\\\". And later: \\\"especially ... in higher-order cortical areas, [the] underlying neural mechanisms may not be adequately captured by linear representations\\\". But, the linear encoding papers they cite nearly always fit the weights on top of deep and highly nonlinear DNN backbones. In other words, the authors misrepresent the typical use of linear encoding, giving the impression that the entire mapping function is linear. 
The more targeted and appropriate way to introduce this dichotomy would be: assuming some frozen feature extractor backbone, the trainable weights that map to brain data can either be an additional linear layer or an additional multi-layer function with some non-linearity.\", \"Another comment on section 2: the evidence the authors cite in favor of nonlinear models is insufficient to justify the blanket claim of \\\"superior performance\\\" compared to linear models. Beyond the Zhang, Li, and Cui papers, are there any that use more recent datasets (especially NSD) that directly compare nonlinear and linear mapping functions, and show the superiority of the former?\", \"Relatedly, because nonlinear functions are so much more expressive, they require much more data to fit, and often, stronger forms of regularization. For many datasets in the field, linear models may actually provide superior performance (in terms of generalization to held-out data), because they are less prone to overfitting. NSD is a great dataset for the purposes of studying nonlinear mappings, but the conclusions derived from NSD may not generalize to scenarios where fewer datapoints are available for fitting complex nonlinear functions. This is one of the many reasons that the appropriate mapping method necessarily depends on one's research goals (see Ivanova et al. 2022) and also on the specifics of one's dataset - the blanket claim that non-linear models are inherently superior because of their computational expressivity is not clearly justified given the limitations inherent in most neuro datasets.\", \"Comments on clarity, figures, results:\", \"The \\\"sample-specific\\\" idea was described in a vague way in the abstract and introduction - the authors should more clearly convey what this means. If I understand correctly, it refers to the following idea: say you have a vector of DNN activations in response to an image. 
No matter what that vector contains, the linear weights will apply the same transformation to it. However, in the presence of a nonlinearity and a multi-stage nonlinear mapping, the nature of the transformation post-nonlinearity will depend on the specific content of the original input vector. The logic here should be spelled out more clearly.\", \"The authors repeatedly refer to an \\\"LLM\\\" in their figure schematics, but the input is always an image, and the brain data they are using is visual. This could be a typo?\", \"The figure 2 caption did not do an adequate job explaining the complex methods schematic.\", \"Figure 3 shows that a greater proportion of voxels achieve p > 0.05 encoding, but that threshold appears to be quite low - the gains from LinBridge over a linear model seem to mostly involve the voxels with weakest signal. A positive interpretation is that the non-linear function somehow has a denoising effect, allowing us to predict some bits of cortex that were too noisy to be fit with a linear model. But, I am troubled by an alternative interpretation: what if these voxels with significant predictivity at the left tail of the blue distributions in Figure 3 arise because the nonlinearity is essentially just modeling noise structure in the data? With only 3 trials available, averaging will be insufficient to fully eliminate noise, and sources of signal and noise variability are possibly correlated in NSD and many other datasets. Is there any analysis that would convince the reader that these voxels with R2 < 0.1 but significant predictivity actually carry useful signal that can teach us something new about visual representation? On this topic, I feel it would be more interesting if the LinBridge approach conferred better predictivity in the right tail of the distribution, suggesting that even in high-SNR voxels, there are non-linear components that linear models cannot explain. But, this does not seem to be the case based on Figure 3. 
At minimum, the authors should provide a graph that quantifies the performance of linear vs non-linear mappings across a range of R2 thresholds spanning from 0.05 to 0.6 or so.\", \"While the results presented in Figure 5 seem to align with past work and suggest that higher-level voxels show more nonlinearity in their responses than earlier voxels, I wonder if this is more of a sanity check than a relevant scientific finding? Wouldn't the null hypothesis be exactly this, under the assumption that feedforward visual processing implements successive nonlinear transformations of the representations that begin in V1? The authors could do more to convey the importance of this finding - perhaps I am not fully understanding the implications.\"], \"questions\": \"I have described my questions and suggestions above in the Weaknesses section. Beyond merely validating that the assumptions of LinBridge hold and that the technique slightly raises R2 scores across some parts of cortex compared to linear mappings, I would strongly recommend that the authors perform additional analysis to convey that LinBridge can provide more rich and detailed forms of insight into visual representation, over and above a linear encoding model applied to the same dataset.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper seeks to interpret non-linear encoding models, showing that the \\\"non-linearity\\\" of the mapping from CLIP-ViT representations to NSD responses in the brain increases with the brain regions' ranks in their hierarchy.\", \"disclaimer\": \"I didn't understand the central use of J_inherent. 
I could not understand what exactly the authors mean by it - a formal definition is absent; henceforth, I'm intuiting that it is somehow a measure of linearity in the mapping.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Interpreting non-linear encoding models is critical because they do tend to outperform linear encoding models as the paper shows. This paper seems to take a step in that direction.\", \"weaknesses\": \"As I mentioned before, the most critical element J_inherent isn't formally defined - making it very hard for me to understand what's happening next (although I can follow the intuitions).\", \"questions\": \"1. As far as I can tell, the non-linear encoding model doesn't do much better than the linear one in terms of R^2. It does predict more voxels though. Is the fact that you see more \\\"non-linearity\\\" in the mapping in later brain regions of the hierarchy due to the fact that the linear model cannot predict many of those voxels? Put another way, could a simple comparison between the R^2 given by a linear model vs the non-linear model, voxel-by-voxel, reveal similar profiles, i.e., the linear model does worse in certain brain areas and therefore the mapping needs to be non-linear?\\n2. What does the mapping being non-linear entail? You are using CLIP-ViT to encode responses of voxels across a huge chunk of the brain. Is it expected that a linear map should be possible in the first place? The brain presumably does way more things than just encode visuo-semantics the way CLIP-ViT does. I would've predicted that the mapping would be more linear in later visual areas as they are more semantic - turns out that's not the case. What gives, according to you? Q1 probably gives hints - does the linear model do much worse in high-level visual cortex? 
If yes, then possibly you're onto something super interesting.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents LinBridge, a flexible framework to extract both the linear inherent component and the nonlinear mapping biases from nonlinear encoding models. However, the performance of LinBridge in terms of fMRI prediction and interpretable visual feature extraction should be fully validated. And the claim of hierarchical nonlinearity within the visual cortex is not convincing.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Using Jacobian matrices (JM) to quantify the complex mapping relationships in non-linear encoding models is interesting. The decomposition into JM_{inherent} and \\\\Delta JM is conceptually reasonable.\", \"weaknesses\": \"1. LinBridge relies on an assumption that the nonlinear mapping between ANN representations and neural responses can be factorized into a linear inherent component that approximates the complex nonlinear relationship, and a mapping bias that captures sample-selective nonlinearity. This assumption is rather strong. Any evidence?\\n2. The computational cost of the Jacobian is not reported. \\n3. Experiments are not rich. More model comparisons are needed.\\n4. This paper claims that nonlinearity varies across different levels of the visual processing hierarchy. However, the evidence of the distribution of hierarchical nonlinearity within the visual cortex is rather weak and highly dependent on the nonlinear embedding. The method should be fully validated before applying it to neuroscience discoveries. Moreover, evidence from previous neuroscience studies/experiments is needed to support this statement.\", \"questions\": \"See previous section.\", \"additional_questions\": \"1. 
Figure 1 does not effectively illustrate the sample-specific characteristics and structural instability of the nonlinear encoding models, despite the authors' claims.\\n2. The method is not clearly described. For example, LLM in Figure 2 has not been defined in the Method. Is it CLIP-ViT? CLIP-ViT is not an LLM, but a multi-modal vision-language model.\\n3. Figure 3 compared the LinBridge nonlinear encoder with a linear encoder in terms of fMRI prediction R^2. It is an unfair comparison since the nonlinear model has better fitting capacity. I would suggest comparing LinBridge with another nonlinear model with contrastive learning, for example, CEBRA, as well as the model in [1].\\n4. What are the features corresponding to the linear responses? Would the features change between using the linear model and the LinBridge model? \\n5. What are sample-selective nonlinear responses? Can you please visualize them? \\n\\nRef\\n[1] Wang, A. Y., Kay, K., Naselaris, T., Tarr, M. J., & Wehbe, L. (2023). Better models of human high-level visual cortex emerge from natural language supervision with a large and diverse dataset. Nature Machine Intelligence, 5(12), 1415-1426.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present a novel encoding model architecture designed to disentangle linear from nonlinear relationships between visual representations and neural responses. The authors demonstrate the framework's utility through its application to fMRI data and successfully reveal hierarchical patterns of nonlinearity in the cortical visual processing stream.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors present an overall very clearly written paper. The problem statement, the approach and the performed analysis are well described. 
The approach is promising, but is severely limited by weak evaluations.\", \"weaknesses\": [\"Despite the clear presentation and interesting ideas, the paper is severely limited by the weak evaluations and missing baselines. The model performances of both the linear and non-linear model are not only very similar, but seem to be fairly low overall, casting doubt on the validity of the approach.\", \"Considered as a whole, there needs to be a substantial revision that addresses these weaknesses, which would improve this paper significantly.\", \"*Major Concerns*:\", \"I'm not convinced of the validity of the overall approach. The authors claim that their approach allows disentangling the contributions of linear and nonlinear processing in the visual hierarchy. For this, they use a highly nonlinear feature extractor (CLIP-ViT) and then combine these features either linearly or non-linearly to predict responses in visual cortex. This method is severely limited in its explanatory power because it so crucially depends on the chosen non-linear feature extractor. A large model like a ViT very likely has an overcomplete basis, such that the differences in linearly or non-linearly combining these features will be very small. There is evidence for this in the nearly identical evaluation metrics between these two models.\", \"The model performance overall seems very weak: visualizing the voxel-wise predictive performance suggests that many voxels can't be predicted more accurately than chance level. 
On fMRI data of the visual cortex, relatively simple LNP-models usually perform better than chance level.\", \"Because the models are so similar, it is crucial to compare that against a baseline with a different architecture and a different training paradigm to distinguish the contrastive training from a simple regression-based neuronal response fitting.\", \"The approach could further be studied more comprehensively by validating it on toy-data, such as combinations of gabor-like V1 simple and complex cells and non-linear combinations of these. There, it would be expected that LinBridge is able to disentangle the linear from non-linear components.\", \"*Minor Concerns*:\", \"the authors write in the introduction: \\\"We apply LinBridge to a neural encoding exploration of vision transformer models and reveal the variability in nonlinearity across different levels of the visual processing hierarchy\\\". This does make it sound as if the approach was used to study intermediate representations of vision transformer models. However, such an analysis was not performed, so this statement is misleading.\"], \"questions\": [\"The description of the encoding model is not complete. Were the same intermediate features of a CLIP-ViT used to predict all voxels?\", \"in Fig. 2, the embedding extraction is denoted as \\\"LLM\\\" in the graphic. Is this supposed to represent the CLIP model?\", \"It is not clear why the contrastive training is needed. Could the authors provide further motivation for why this strategy is meaningful here?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
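Several of the reviews in the record above revolve around LinBridge's central factorization, JM = JM_inherent + ΔJM, and the AFD (absolute first derivative) metric used to grade how linearly a voxel responds. The sketch below is a simplified numerical stand-in, not the paper's implementation (which compresses Jacobians with a CNN and trains with a contrastive InfoNCE loss): it uses the analytic Jacobian of a small 2-layer sigmoid network, takes the sample mean as the inherent linear component, and fits a degree-1 polynomial to the per-sample bias magnitudes. All function names here are hypothetical.

```python
import numpy as np

def jacobian_2layer(x, W1, W2):
    # Analytic Jacobian of f(x) = W2 @ sigmoid(W1 @ x) at one input x:
    # J(x) = W2 @ diag(sigma'(W1 x)) @ W1, so J varies with x (sample-selective).
    s = 1.0 / (1.0 + np.exp(-(W1 @ x)))
    return W2 @ (np.diag(s * (1.0 - s)) @ W1)

def factorize_jacobians(X, W1, W2):
    # Stack per-sample Jacobians, then split into a shared (mean) linear
    # component and per-sample biases, mirroring JM = JM_inherent + Delta_JM.
    jms = np.stack([jacobian_2layer(x, W1, W2) for x in X])  # (n_samples, n_out, n_in)
    jm_inherent = jms.mean(axis=0)
    delta_jm = jms - jm_inherent   # zero-mean across samples by construction
    return jm_inherent, delta_jm

def afd(delta_jm):
    # AFD proxy: per output unit ("voxel"), fit a degree-1 polynomial to the
    # bias magnitude across ordered samples and keep |slope|; values near 0
    # indicate a nearly linear response, larger values a more nonlinear one.
    n_samples = delta_jm.shape[0]
    mags = np.linalg.norm(delta_jm, axis=2)            # (n_samples, n_out)
    slopes = np.polyfit(np.arange(n_samples), mags, 1)[0]
    return np.abs(slopes)
```

For a purely linear readout the per-sample Jacobians coincide, so ΔJM vanishes and the AFD is exactly zero; larger AFD values flag outputs whose mapping changes with the input, which is the behavior the reviews discuss in the context of higher visual cortex.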
C06kww3Qky
Fitting Networks with a Cancellation Trick
[ "Jiashun Jin", "Jingming Wang" ]
The degree-corrected block model (DCBM), latent space model (LSM), and $\beta$-model are all popular network models. We combine their modeling ideas and propose the logit-DCBM as a new model. Similar to the $\beta$-model and LSM, the logit-DCBM contains nonlinear factors, where fitting the parameters is a challenging open problem. We resolve this problem by introducing a cancellation trick. We also propose R-SCORE as a recursive community detection algorithm, where in each iteration, we first use the idea above to update our parameter estimation, and then use the results to remove the nonlinear factors in the logit-DCBM so the renormalized model approximately satisfies a low-rank model, just like the DCBM. Our numerical study suggests that R-SCORE significantly improves over existing spectral approaches in many cases. Also, theoretically, we show that the Hamming error rate of R-SCORE decays faster than that of SCORE in a specific sparse region, and is at least as fast outside this region.
[ "Network analysis", "DCBM", "logit-DCBM", "community detection", "SCORE" ]
Accept (Poster)
https://openreview.net/pdf?id=C06kww3Qky
https://openreview.net/forum?id=C06kww3Qky
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tHvM0hs6Cc", "rS3oYrkVLt", "hFgsKAt8aA", "gQp4MUmDLt", "e176M1L5sp", "avy5g9U31q", "Qlk0TIU3ua", "DkTt6ObNiT", "B9veAvJzgk", "6YSUFhAJTn", "2g5GD5n9Ye", "0PFwOfAz4D" ], "note_type": [ "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1737524177645, 1732776342852, 1734635573875, 1732777230036, 1732778294161, 1732778842959, 1730631992444, 1730721577596, 1732774070031, 1732775572308, 1730497423407, 1730798115733 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12287/Authors" ], [ "ICLR.cc/2025/Conference/Submission12287/Area_Chair_grCu" ], [ "ICLR.cc/2025/Conference/Submission12287/Authors" ], [ "ICLR.cc/2025/Conference/Submission12287/Authors" ], [ "ICLR.cc/2025/Conference/Submission12287/Authors" ], [ "ICLR.cc/2025/Conference/Submission12287/Reviewer_urJ2" ], [ "ICLR.cc/2025/Conference/Submission12287/Reviewer_2iwW" ], [ "ICLR.cc/2025/Conference/Submission12287/Authors" ], [ "ICLR.cc/2025/Conference/Submission12287/Authors" ], [ "ICLR.cc/2025/Conference/Submission12287/Reviewer_CDCt" ], [ "ICLR.cc/2025/Conference/Submission12287/Reviewer_wiFe" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reply to Reviewer wiFe (2)\", \"comment\": \"**Response to Question 3**: Sorry but we do not have a Lemma 3 in our paper, but based on your description, we suppose you are referring to Lemma 2.2, which is about using cancellation trick to estimate $\\\\theta_i$.\\n\\nWhat we present in Section 2 is the so-called *oracle approach* in the literature, \\na frequently used idea for constructing new estimates. 
The main idea of the oracle approach is that we first find a way to reconstruct the parameters of interest in the *idealized noiseless case*; we then mimic the idea in the noiseless case and propose an approach for the real case. The oracle approach has been proven to be very successful in the literature; many important modern ideas, including classical PCA and the lasso (see \"Uncertainty Principles and Signal Recovery\" by Donoho (1989)), originated from the oracle approach. \n\nIn our setting, the parameters of interest are $\\theta_1, \\theta_2, \\ldots, \\theta_n$, and the model is \n$$A = \\Omega - \\mathrm{diag}(\\Omega) + W, \n$$\nwhere $W$ is the noise matrix, and $\\mathrm{diag}(\\Omega)$ only has a secondary effect. Therefore, \nto apply the oracle approach to our setting, \n- we first use $\\Omega$ to construct $\\theta_1, \\ldots, \\theta_n$ (this is the idealized noiseless case). \nThis is done in Lemma 2.2, where we show that for each odd number $m \\geq 3$, \n$$\n\\theta_{i_1}^2 = \\frac{\\sum_{i_2, \\ldots, i_m \\in S_{i_1} (dist)} \n\\Omega_{i_1 i_2} (1 - \\Omega_{i_2 i_3}) \\ldots \\Omega_{i_{m-2} i_{m-1}} (1 - \\Omega_{i_{m-1} i_m}) \\Omega_{i_m i_1}\n}{\\sum_{i_2, \\ldots, i_m \\in S_{i_1}(dist)} (1 - \\Omega_{i_1 i_2}) \\Omega_{i_2 i_3} \\ldots (1 - \\Omega_{i_{m-2} i_{m-1}}) \\Omega_{i_{m-1} i_m} (1 - \\Omega_{i_m i_1})}. \n$$\n- we then mimic the idea and replace $\\Omega$ by $A$ everywhere in the construction above (this gives rise to \nan estimate for the real case). This is equation (10), where we set $m = 3$. 
For general $m$, the idea is similar, and for every $i_1$, we estimate $\\\\theta_{i_1}$ by\\n$$\\n\\\\hat\\\\theta_{i_1}^2 = \\\\frac{\\\\sum_{i_2, \\\\ldots, i_m \\\\in S_{i_1} (dist)} \\nA_{i_1 i_2} (1 - A_{i_2 i_3}) \\\\ldots A_{i_{m-2} i_{m-1}} (1 - A_{i_{m-1} i_m}) A_{i_m i_1}\\n}{\\\\sum_{i_2, \\\\ldots, i_m \\\\in S_{i_1}(dist)} (1 - A_{i_1 i_2}) A_{i_2 i_3} \\\\ldots (1 - A_{i_{m-2} i_{m-1}}) A_{i_{m-1} i_m} (1 - A_{i_m i_1})}. \\n$$\\nThis is nothing but replacing all $\\\\Omega$ by $A$ in equation (9) of Lemma 2.2. \\n\\n- we then justify that the derived estimates $\\\\hat{\\\\theta}_{1}, \\\\ldots, \\\\hat{\\\\theta}_n$ \\nare consistent for $\\\\theta_1, \\\\ldots, \\\\theta_n$ rigorously. \\nFor space reasons, we present this in Lemma C.1-C.2 of the supplement. \\nThe idea why this works is similar to that of the central limit theorem, though the proof is \\nmuch more complicated. \\n\\nThe above works for each fixed odd number $m \\\\geq 3$. But since the algorithm with $m = 3$ is already first-order optimal, there is little reason to use a larger $m$ (for a larger $m$, we have similar numerical results, but the \\nproof is much longer). \\n\\n***\\n**Response to Question 4**: Yes, the cancellation trick is applicable to much broader settings. \\nIn our setting, Lemma 2.2 holds because for all $i, j$ in the same community, \\n$$\\n\\\\frac{\\\\Omega_{ij}}{(1 - \\\\Omega_{ij})} = \\\\theta_i \\\\theta_j \\\\pi_i' P \\\\pi_j = \\\\theta_i \\\\theta_j. \\n$$\\nWe have a similar result if there are positive functions $g$ and $h$ such that for all $i, j$ in the same community, \\n$$\\n\\\\frac{g(\\\\Omega_{ij})}{h(\\\\Omega_{ij})} = \\\\theta_i \\\\theta_j \\\\pi_i' P \\\\pi_j = \\\\theta_i \\\\theta_j. 
\\n$$\\nIn fact, by basic algebra, it is seen that for any odd number $m \\\\geq 3$, \\n$$\\n\\\\theta_{i_1}^2 = \\\\frac{\\\\sum_{i_2, \\\\ldots, i_m \\\\in S_{i_1} (dist)} \\ng(\\\\Omega_{i_1 i_2}) h(\\\\Omega_{i_2 i_3}) \\\\ldots g(\\\\Omega_{i_{m-2} i_{m-1}}) h(\\\\Omega_{i_{m-1} i_m}) g(\\\\Omega_{i_m i_1}) \\n}{\\\\sum_{i_2, \\\\ldots, i_m \\\\in S_{i_1}(dist)} h(\\\\Omega_{i_1 i_2}) g(\\\\Omega_{i_2 i_3}) \\\\ldots h(\\\\Omega_{i_{m-2} i_{m-1}}) g(\\\\Omega_{i_{m-1} i_m}) h(\\\\Omega_{i_m i_1})}. \\n$$\\nFor revision, we have added a new remark, Remark 3, in Section 2. See details therein.\\n\\n***\\n**Response to Typos**: Thank you and we have fixed them.\"}", "{\"metareview\": \"This paper introduces the logit-DCBM model for random graphs which extends several previous models. The paper then designs R-SCORE to estimate the parameters of the model, which addresses nonlinearities in the model by a cancellation trick, detailed in Lemma 2.2. Empirically R-SCORE seems to outperform existing spectral clustering approaches. Overall the reviewers find the results to be interesting, however there are some concerns about presentation (which was partly addressed in author response) and the significance of the work especially for the ICLR community.\", \"additional_comments_on_reviewer_discussion\": \"Authors clarified the scope and motivation of the setting and addressed some confusions.\"}", "{\"title\": \"Reply to Reviewer 2iwW\", \"comment\": \"We thank the reviewer for a great summary and we are very glad that the reviewer thinks our study depicts an optimistic view of the problems. Below we provide some clarifications on the weaknesses and address the question.\\n***\\n**Accessibility**: We thank you for a great comment. 
Community detection is probably the most studied problem in network analysis, and we have revised the paper to make it more accessible to such readers.\\n\\n***\\n**Table of notations**: We do have a subsection called \\\"Content and notation\\\" at the end of Section 1. The table covers all notations we think are worthy of clarification.\\n***\\n**Response to Question**: Thanks for raising this great question; we mention two points. \\nFirst, in the simpler SBM case, all $\\\\theta_i$ are the same, so the logit-SBM model and the SBM are equivalent. This is because the nonlinear matrix $N$ satisfies \\n$$\\nN_{ij} = \\\\frac{e^{\\\\theta_i + \\\\theta_j}}{1 + e^{\\\\theta_i + \\\\theta_j}} = c_0, \\\\qquad \\\\mbox{for a constant $c_0$, if all $\\\\theta_i$ are the same}. \\n$$\\nTherefore, in this special case, the logit-DCBM reduces to an SBM (which is a special DCBM). Second, for the DCBM, many algorithms have exponential error rates. However, for the general logit-DCBM, due to the nonlinear factors, the optimal rates remain unknown, but it is likely that we only have a polynomial rate. For example, suppose all $\\\\theta_i$ are of the same order. Then the error rate of SCORE is \\n$$\\n\\\\leq C e^{-C n \\\\bar{\\\\theta}}, \\n$$\\nwhich is exponential (e.g., Jin, Ke, Luo (2021)). For the logit-DCBM, \\nwe are only able to show that the error rate of a procedure is \\n$$\\n\\\\leq C (n \\\\bar{\\\\theta})^{-\\\\alpha}, \\\\qquad \\\\mbox{for some $\\\\alpha > 0$}, \\n$$\\nwhich is polynomial.\"}
Especially, the cancellation trick is a new approach for removing nonlinearity in latent space models and can be generalized to other nonlinear links (please see our response to Reviewer 2iwW). An important message we convey in this paper is: MLE (which can only be solved by nonconvex optimization) is not the only option for attacking nonlinear problems. There exists a novel cancellation trick that removes nonlinearity and allows us to apply the powerful tools designed for linear settings (such as spectral approaches). We believe this is a significant contribution.\\n\\nEven this particular motivation of \\\"studying logit-DCBM\\\" is important by itself. Network modeling is an important problem in the machine learning community, and what is the best model for real networks remains an open problem. The DCBM, LSM, and \\n$\\\\beta$-model are all popular models, and many works about these models have been published in machine learning conferences. Here are a few examples:\\n\\n- Neural Latent Space Model for Dynamic Networks and Temporal Knowledge Graphs. By Tony Gracious et al., AAAI 2021\\n- Learning Guarantees for Graph Convolutional Networks in the Stochastic Block Model. By Wei Lu, ICLR 2022\\n\\nOur logit-DCBM is a broader model that extends the above models. Therefore, our work is an important step forward in network modeling. \\n\\nSecond, our work is **closely related to machine learning**. In machine learning, we have many nonlinear latent variable models \\nspread across many areas, including cancer clustering, empirical finance, text analysis, and network analysis. Due to the nonlinearity, \\nhow to analyze such models is a challenging problem. In this paper, we propose an interesting cancellation trick with which \\nwe can effectively remove the nonlinear factor in some latent variable models.\\n\\nEspecially, we showcase this cancellation trick with a network setting. 
The logit-DCBM is a nonlinear model, where how to do community detection is a challenging problem: existing approaches are either computationally inefficient or hard to analyze. \\nWe show that, by removing the nonlinear factor in a logit-DCBM (a nonlinear network model), we can convert it approximately to a DCBM. We can then effectively apply (say) SCORE (a recent spectral approach) for community detection.\\n\\nFor space reasons, we only showcase this trick with a network setting, but the idea is extendable to other nonlinear latent space models. For this reason, our work may spark new research in many different directions in machine learning.\\n\\n***\\n**Hard to follow with many acronyms**: Thanks, but we believe our way of presenting acronyms is conventional: we present the full name when the acronym appears for the first time, with references. For example, the full name of DCBM is presented in the first line of the abstract, and again in Line 36 of the introduction with a reference. The full name of LSM is explained in Line 51 with a reference. The logit-DCBM is the direct combination of logit and DCBM, so there is no need to present the full name. For your comments on the EM algorithm: in most published works, people only cite it as the \\\"EM algorithm\\\", without explaining what EM stands for. The EM algorithm is a textbook algorithm (first proposed by Dempster et al (1977)) for estimating latent variable models. \\n***\\n**Response to Question on $|\\\\lambda_{\\\\min}(P)|$**: In the area of network community detection, it is conventional to assume $\\\\mathrm{rank}(\\\\Omega) = K$ (in the DCBM case) or $\\\\mathrm{rank}(\\\\widetilde{\\\\Omega}) = K$ (the logit-DCBM or LSM case). Since $\\\\mathrm{rank}(\\\\widetilde{\\\\Omega}) = K$ in our case, we must have $\\\\mathrm{rank}(P) = K$ by basic algebra. Since $P$ is $K \\\\times K$, $P$ is non-singular, and so $|\\\\lambda_{min}(P)| > 0$. 
\\n***\\n**Response to Question on $C$**: In this paper, $C$ stands for a generic constant which may vary from one occasion to another (such a use of notation is conventional in the literature). We have clarified this at the end of Section 1 (Content and notation). \\n***\\n**Response to Question on simulations**: Thanks, and this is a great comment. It seems that the most appropriate MLE algorithm to compare with is the non-convex penalized MLE-based approach (npMLE) by Ma, Ma and Yuan 2020. The paper deals with the latent space model (LSM) and is probably the closest related work to our paper. We have added two new experiments (Experiment 2-3) in Section 4. The experiments suggest \\n\\n- R-SCORE runs much faster than the npMLE. \\n- In many settings, R-SCORE has a lower error rate than the npMLE.\"}", "{\"title\": \"Reply to Reviewer CDCt\", \"comment\": \"Thanks for a great summary. We are very glad that the reviewer finds our theoretical results are well presented and support the\\nmerit of our method. Below, we first provide clarifications on the weaknesses and then respond to the questions in detail.\\n***\\n**Presentation style**: Thanks, but we have explained the theoretical results on Pages 3-4, with explicit rates for the Hamming errors (see Line 161). It is unconventional to present a theorem in the introduction, especially for short papers like this one.\\n\\n***\\n**Stopping criteria and computational complexity**: The stopping criteria were mentioned in the first paragraph of Section 3 (Lines 253-256). The computation cost is specified in the paragraph right above Remark 4. \\n\\n***\\n**Numerical experiment**: Thanks, and we have fixed this and revised the text. We have also added two new experiments comparing R-SCORE with the non-convex penalization MLE-type algorithm by Ma, Ma, and Yuan (2020), which we refer to as the npMLE. See Section 4.\\n\\n***\\n**Response to question on numerical studies**: Thanks for the question. 
Regardless of the data settings, R-SCORE \\nalways converges very quickly ($\\\\leq 5$ steps). For example, in the experiment reported last time, R-SCORE converges in $1$ step. In the newly added experiment, R-SCORE converges in $3$ steps. The key idea of R-SCORE is to use a cancellation trick to estimate \\nthe nonlinear factors, and so to convert the logit-DCBM to a DCBM. By design, such an algorithm usually converges quickly, for \\nthe estimate of each parameter in the nonlinear factor quickly becomes flat when we iterate. This is different from penalization approaches (e.g., Ma, Ma, Yuan (2020), which is also an MLE approach), where we need hundreds of iterations. See Figure 3 (left) in the supplement for how fast the two algorithms converge when we iterate. \\n\\nNote that a quick convergence is really not a disadvantage. In the revision, we compare R-SCORE with the MLE approach by Ma, Ma, Yuan (2020). In many settings, R-SCORE is not only much faster but also has a smaller error rate. See for example Figure 3 (right).\"}", "{\"summary\": \"The paper introduces the logit-DCBM as an extension of popular network models DCBM, LSM, and $\\\\beta$-model. The authors propose a \\\"cancellation trick\\\" to tackle the nonlinearity challenges of the logit-DCBM, which allows the model to retain a form of low-rank approximation similar to DCBM. They also present an algorithm, R-SCORE, for community detection in networks based on the logit-DCBM. R-SCORE iteratively updates the model's parameter estimates, removing nonlinear factors and approximating a low-rank model for clustering. 
The paper shows that R-SCORE achieves a faster Hamming error rate in certain sparse regions than standard SCORE.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The logit-DCBM model provides a nice combination of the DCBM, LSM, and $\\\\beta$ models.\\n\\nThe \\\"cancellation trick\\\" allows disregarding of nonlinear terms by means of a ratio of two large sums used as estimators. This is an interesting idea that can be applied to various other settings.\", \"weaknesses\": \"The scope of the paper is [quote] \\\"to propose a nonlinear version of DCBM so that it will hopefully be more acceptable\\\" and extend SCORE to it. I find this goal somewhat incremental and only marginally related to the ICLR community.\\n\\nThe paper is hard to follow, with many acronyms that are not spelt out and sentences that are ambiguous. An example: line 085, \\\"The logit-DCBM (and all other models mentioned above) are so-called latent variable models, where \\u03a0 is the matrix of latent variables. For latent variable models, the EM algorithm (e.g., Dempster et al. (1977)) is a well-known approach.\\\" What is EM? \\\"approach\\\" to do what?\", \"questions\": \"In eq (16), how can you guarantee |\\\\lambda_{min}|>0?\\n\\nIn Thm 3.1, Lemma 3.1, Thm 3.2, Corollary 3.1: What is C? Are the C used the same across the three results?\\n\\nIn the simulations, can't you compare with additional baselines based on e.g. an MLE?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes R-SCORE (Recursive-SCORE), a generalization of the SCORE spectral algorithm to estimate the parameters and communities in a newly introduced \\\"nonlinear\\\" extension of the DCBM model [3] where a logit function is added to the probability. 
The R-SCORE algorithm alternatingly optimizes the estimates of the core matrix $\\\\Pi$ and the offset terms $\\\\theta$ with the SCORE algorithm and a custom refitting step, respectively. Theoretical results are shown both for the application of the original SCORE algorithm to the proposed extension logit-DCBM and for the application of the newly proposed R-SCORE algorithm.\\n\\n\\n\\n\\n\\n***References***\\n\\n[1] Jin, J. (2015). \\\"Fast community detection by SCORE\\\". Annals of Statistics.\\n[2] Jiashun Jin, Zheng Tracy Ke, Shengming Luo, and Minzhe Wang. \\\"Optimal estimation of the number of network communities\\\". Journal of the American Statistical Association, 2023. \\n\\n[3] Karrer, B. and Newman, M. E. J. (2011). Stochastic blockmodels and community structure in networks
The reviewers have two main points or recommendations: (1) to clarify the relevance to machine learning, and (2) to add a numerical comparison with the MLE approach.\\n\\n***\\n**For the first main point**: (a) Our paper introduces a cancellation trick by which we can conveniently reduce a nonlinear latent variable setting to a linear latent variable setting, (b) network analysis is an important area in machine learning, with increasingly more interest, and how to model and analyze social networks remains a largely open problem. \\n\\nFor (a): Compared with classical multivariate statistical analysis approaches, a major advantage of machine learning approaches lies in modern ideas for tackling nonlinearity. However, for unsupervised learning (e.g., clustering and fitting latent variable models), \\nhow to tackle nonlinearity is a very challenging problem. Most existing machine learning approaches use non-convex optimization, with substantial efforts devoted to speeding up algorithms, but still, many non-convex optimization approaches remain computationally slow. \\n\\nIn this paper, we discover a simple cancellation trick that is able to remove nonlinearity effectively and so allows the users to take advantage of existing powerful methods for linear problems (e.g., PCA) and thus significantly reduce computational costs. This points out a new possible solution for tackling nonlinearity. \\n\\nSince nonlinear latent variable models exist in many machine learning problems and have broad applications (e.g., in cancer clustering, empirical finance, text analysis, and network analysis), we believe our paper is very relevant to the machine learning community. \\n\\nEspecially, the logit-DCBM is a nonlinear latent variable model, where how to do community detection is a challenging problem: existing approaches are either computationally inefficient, or hard to analyze, or both. 
We show that, by removing the nonlinear factor in the logit-DCBM (a nonlinear network model), we can convert it approximately to a DCBM. We can then effectively apply (say) SCORE (a recent spectral approach) for community detection. For networks with a few thousand nodes, this can speed up the computation by 50 times compared to using MLE.\\n\\n\\nFor space reasons, we only showcase this trick with a network setting, but the idea is extendable to other nonlinear latent space models. For this reason, our work may spark new research in many different directions in machine learning. In the revised paper, we have added a paragraph at the end of Section 5, reflecting the points above. See details therein.\\n\\nFor (b): Network analysis is an important area in machine learning. Many papers on network modeling and analysis have been accepted at top machine learning conferences. For instance, \\n\\n1) Effects of Graph Convolutions in Multi-layer Networks. By Aseem Baranwal et al., ICLR 2023\\n\\n2) Semi-supervised Community Detection via Structural Similarity Metrics. By Yicong Jiang and Tracy Ke, ICLR 2023\\n\\n3) Random Sparse Lifts: Construction, Analysis and Convergence of finite sparse networks. By David Robin et al., ICLR 2024\\n\\nThus, we believe that the network setting presented in this paper is of interest to the machine learning community.\\n \\n***\\n**For the second main point**, we have added two numerical experiments (Experiment 2 and 3 in Section 4) where we compare \\nR-SCORE with the approach by Ma, Ma, and Yuan (2020). The latter is essentially a non-convex penalized MLE (npMLE) approach for fitting latent space models, which is the work that is most closely related to our work. 
We compare the error rates of R-SCORE with npMLE under different settings, and find that (a) R-SCORE is computationally much faster, and (b) in many settings, the error rate of \\nR-SCORE is lower than that of npMLE.\"}", "{\"title\": \"Reply to Reviewer wiFe (1)\", \"comment\": \"Thanks for the summary and great comments. We are happy to address the concerns.\\n***\\n**Response to Weaknesses**: We wish to re-iterate the motivation of our work, which is three-fold: \\n\\n* We introduce a cancellation trick, which can be useful for many nonlinear latent-variable models in statistics and machine learning. For reasons of space, we only use the logit-DCBM to showcase the trick, but the trick may be useful in much broader settings. We have explained this in ``Comments to all\\\" above; see details therein. \\n\\n* If we use the DCBM, then we must impose a large number of constraints as in equation (3). This is not desirable, especially if we want to fit the model using a penalization approach. If we use the logit-DCBM, then we do not need such constraints. \\n\\n* The logit-DCBM is an extension of the DCBM, LSM, and $\\\\beta$-model, each of which is popular in network analysis. Using the logit-DCBM allows us to have a unified model, and thus helps reduce repetitive and overlapping research in this area (and so saves research time and effort in the research community of network analysis). \\n\\nWe believe we have explained the motivations carefully in the introduction, but we agree with you that more references could help. In this revision, we have added textbooks and popular packages there as references, in all of which the logistic link is the default choice for handling binary data. We also pointed out where Hastie et al. (2009) recommend using logistic regression. 
\\n\\n***\\n**Response to Question 1**: The open problem is all of these: \\\"how to analyze the MLE\\\", \\\"how to come up with an estimate\\\", and \\\"how to come up with an estimate that satisfies certain properties\\\" (e.g., consistency). In classical statistical settings, the MLE is usually consistent, and we can analyze the MLE with a standard approach (e.g., the Lehmann and Casella (2006) textbook). \\nFor our setting, standard analysis for the MLE does not work, and there are no consistency results for the MLE. It is therefore unclear (a) how to develop a new method, and (b) what methods are consistent. We have explained these in Lines 118-120 carefully: \\\"How to estimate N in the $\\\\beta$-model is a well-known open problem, as explained in the survey paper (Goldenberg et al., 2010) (see also Rinaldo et al. (2010)): \\\"A major problem with the $p_1$ and related models, recognized by Holland and Leinhardt, is the lack of standard asymptotics, ..., we have no consistency in results for the maximum likelihood estimates\\u201d (the $p_1$ model is a special case of ours). \\n \\nOur major contribution here is that, using a cancellation trick we discover, we overcome the challenge by proposing easy-to-use estimates, showing that they are consistent, and that they lead to improved community detection results downstream.\\n\\n***\\n**Response to Question 2**: The estimates are consistent with rigorous proofs. In fact, under our settings, $\\\\mathbb{P}(\\\\widehat{\\\\Pi} \\\\neq \\\\Pi) = o(1)$, and the estimates for $P$ and $\\\\theta_1, \\\\ldots, \\\\theta_n$ are all consistent. Since our ultimate goal is community detection (where we use such estimates in a middle step), for reasons of space we defer both the statements and proofs for consistency to the supplement: see Sections C.2 and C.3 or Lemmas C.1 and C.2 for details. Second, R-SCORE is the combination of SCORE and these estimates. 
Our main results in Section 3.1 shows that R-SCORE improves SCORE on the error rate of community detection. This implies that these estimates behave as expected.\\n***\"}", "{\"summary\": \"The paper proposes logit-DCBM (Degree Corrected Block Model) for community detection in graphs. Authors propose a method to overcome the presence of non-linear terms in the estimation problem. Authors refer to their method as the \\\"cancelation trick\\\". They propose an iterative method dubbed R-score (R for Recursive) which recursively applies the cancellation trick to the estimate. Authors provide theoretical results that guarantee the estimate is close to the true value.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Literature is well reviewed and contributions are well positioned with respect to existing art. Authors introduce their contributions with thorough explanations that are explained in easy-to-read language, despite the large amount of notation and math involved. Theoretical results are well presented and seem to indicate the merit of the method.\", \"weaknesses\": \"Presentation of this rather theoretical paper could benefit from a more formal statement of results with a Theorem which goes earlier in the paper and the body of the paper being devoted to explaining the intuition and implications of the result.\\n\\nI do not find a clear indication of stopping criteria in the paper, neither did I find any discussion on computational complexity and scalability of the method. \\n\\nNumerical experiments are not well presented. At the very least figures could have x and y labels.\", \"questions\": \"Numerical experiments do not show any benefit in iterating: after a single iteration the method seems to have converged. Can you explain this? was the test data too easy? 
did you try running your model on more challenging problems?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors first propose the logit-DCBM model as a nonlinear variant of the degree-corrected block model (DCBM), which essentially replaces the log link function with the logit link function. Replacing the link from log brings some nice properties to the model, such as removing some constraints on latent parameters. However, this comes at the cost of introducing non-linearity, and the parameter estimation becomes difficult.\\n\\nThe main technique is the cancellation trick, used to estimate nonlinear terms in the logit-DCBM, which simplifies parameter estimation, and the authors claim that this addresses an open problem in the $\\\\beta$-model parameter estimation (another community variant which is a special case of their logit-DCBM). Additionally, the paper introduces R-SCORE for community detection, a recursive adaptation of the SCORE algorithm for the logit-DCBM. Finally, they derive upper bounds on the classification error rates for SCORE and R-SCORE for their proposed model logit-DCBM.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"See summary.\", \"weaknesses\": \"Overall, I am not too sure why the logit-DCBM model is worth studying. The authors mention some motivation in lines 58-60 and lines 69-71, but I did not find it compelling enough. I found the motivation argued hand-wavily without any concrete and interesting reasons.\\nLine 58, \\\",to many statisticians, (5) is preferred.\\\": Which statisticians? References?\", \"line_59_60\": \"I don't think the logistic regression is the only \\\"recommended\\\" model by textbooks for binary data. Also, what does \\\"recommending\\\" even mean? Where does Hastie et al. 2009 recommend it?\\n\\nSee also questions. 
Questions 1 and 2 in particular, where I further raised my concern.\\n\\nIt is perhaps interesting to study logit-DCBM, but it is not clear to me right now based on what is written on the paper, and that's why I am currently leaning towards a borderline reject.\", \"questions\": \"1. I could not exactly make sense of the open question in the references given e.g. (Goldenberg et al., 2010). What is the open problem precisely? To analyze MLE? Or just to come up with an estimate? Or to come up with an estimate that satisfies certain properties? What is the precise technical open question?\\n2. What properties does your estimate have? As far as I can think, it is not unbiased, or is it? Is it consistent? What can you say further? For me, right now, it is just some estimate.\\n3. Ultimately, I really want to get a better understanding of Lemma 3 and how good this estimate is. I understood the statement and algebra used therein. But is there something more interesting? What is the effect of $m$ on the quality of the estimate? Does it not change anything in terms of quality? Or even if it does, we are looking at the statistics for $m=3$ for computational efficiency? Also, is there an intuitive way of thinking about what that statistics is capturing in (10) and its equivalent for higher $m$? And why I can expect it to be a reasonable estimate? Essentially, trying to get intuition on why cancellation happens. \\n4. Does the cancellation trick also apply to other link functions beyond logit? In either yes or no, could you please walk me through why for some standard link functions (e.g. probit or other relevant ones)?\", \"typos\": \"-In Lemma 2.1 Proof, is there an extra summation over $i$ again in $(I)$?\\n-In Line (222): \\\",the sum is over..\\\",the index $i_2$ is repeated twice.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
BzvVaj78Jv
Students Rather Than Experts: A New AI for Education Pipeline to Model More Human-like and Personalised Early Adolescences
[ "Yiping Ma", "Shiyu Hu", "Xuchen Li", "Yipei Wang", "Shiqing Liu", "Kang Hao Cheong" ]
The capabilities of large language models (LLMs) have been applied in expert systems across various domains, providing new opportunities for AI in Education (AI4Education). Educational interactions involve a cyclical exchange between teachers and students. Current research predominantly focuses on using LLMs to simulate teachers, leveraging their expertise to enhance student learning outcomes. However, the simulation of students, which could improve teachers' instructional skills, has received insufficient attention due to the challenges of modeling and evaluating virtual students. This research poses the question: “Can LLMs be utilized to develop virtual student agents that mimic human-like behavior and individual variability?” Unlike expert systems focusing on knowledge delivery, virtual students must replicate learning difficulties, emotional responses, and linguistic uncertainties. These traits present significant challenges in both modeling and evaluation. To address these issues, this study focuses on language learning as a context for modeling virtual student agents. We propose a novel AI4Education framework, termed SOE (Scene - Object - Evaluation), to systematically construct LVSA (LLM-based Virtual Student Agents). By curating a dataset of personalized teacher-student interactions with various personality traits, question types, and learning stages, and fine-tuning LLMs using LoRA, we conduct multi-dimensional evaluation experiments that integrate both subjective human evaluations and objective metrics. 
Specifically, we: (1) develop a theoretical framework for generating LVSA; (2) integrate human subjective evaluation metrics into GPT-4 assessments, demonstrating a strong correlation between human evaluators and GPT-4 in judging LVSA authenticity; and (3) validate that LLMs can generate human-like, personalized virtual student agents in educational contexts, laying a foundation for future applications in pre-service teacher training and multi-agent simulation environments.
[ "AI for Education; Large Language Models; LLM-based Agent; Teacher Training" ]
Reject
https://openreview.net/pdf?id=BzvVaj78Jv
https://openreview.net/forum?id=BzvVaj78Jv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zLeGFzj60h", "yYDSEhuKW5", "w5QoTFUqRD", "vwNVGfgvwz", "tGwVOeesC0", "shISvHUxLl", "sbIrz3hRnD", "r68cW80Gcy", "qUuXZhsIAq", "q3lIrexq3h", "oDKt9zC83u", "mDKgCZwbHi", "khv1y6gzv5", "k11uMovqPX", "hwvzl5SC79", "hBbYbI3sRB", "fYu2GoGwyY", "dSy5oaEJJ2", "dB0hN4DMe3", "c2E5HLlj7a", "btSPZlAgUY", "bsr6wFD5pK", "aklY0cU3B9", "X9xUicGOos", "VUdzhorCdO", "V5rZhTRDUB", "T8fuIyE7Bx", "SgOF9lg7nX", "SSZsqPlRkw", "SFXpyfrFIy", "RPSBzSAyxc", "R2sX8r8C9O", "ORvgEPZD27", "NFE3MMDASQ", "LOzvZGSiHl", "KW6jcUbMby", "I224F5jT09", "Gx7Ye1zy9I", "GDFxOKDa9P", "FyIp6h4NhY", "BAW7nvztAI", "AZGGW9PFyG", "9fnaWgG2aP", "74X0mE5xCr", "6zvGjI2Zy7", "6tPsig9dX9", "6qWe4RWNGl", "6SgzFPATaW", "662UuKVzzv", "5Ms2GOeudD", "5CoWx5d30y", "2XLnQ9VvcJ", "2AgiWuqRp8", "0mi7Z275nH" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730050158981, 1732554532601, 1732548802968, 1732549747292, 1732870929986, 1732562428154, 1734830579652, 1732561986880, 
1732560985778, 1732870293851, 1732648285572, 1733158364703, 1732561300287, 1732559589887, 1732550101446, 1732562253178, 1732550622351, 1732869525514, 1732656879425, 1732655605714, 1730658794718, 1732551534876, 1732707134996, 1732560244107, 1732869921742, 1732655766776, 1732870601485, 1732549026149, 1729715423525, 1730689955087, 1732561778730, 1732553761679, 1732557110786, 1732560415432, 1732869084605, 1732549292769, 1732867904234, 1732554136979, 1732560739236, 1732559346651, 1732742818127, 1732551905052, 1737523555855, 1732558446456, 1732559939971, 1732557809393, 1732656119806, 1732871114577, 1732868774011, 1732558941275, 1732552499758, 1732871397616, 1732556809575, 1732553033550 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_3QkY" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Area_Chair_pg1R" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_wxDr" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_3QkY" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_3QkY" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_3QkY" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_rtkz" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_3QkY" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_yWBj" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_wxDr" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_3QkY" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Reviewer_3QkY" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ], [ "ICLR.cc/2025/Conference/Submission3119/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This work presents an AI4Education framework, termed SOE (Scene - Object - Evaluation), to construct LVSA (LLM-based Virtual Student Agents). 
It contributes a new Chinese education dataset and evaluates four existing LLMs which are fine-tuned on this dataset to simulate virtual students with specific traits. A Turing test is used to evaluate whether LLMs can generate such human-like virtual students. Specifically, GPT-4 is used for large-scale automatic evaluation and a small group of real humans is recruited for small-scale evaluation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This work presents an LLM-based agent evaluation pipeline including data preparation, LLM fine-tuning, and virtual student evaluation.\\n\\n2. A new large dataset is contributed to study the virtual student agents mimicked by LLMs.\\n\\n3. A comprehensive experiment is conducted to evaluate the performance of student agents powered by LLMs.\\n\\n4. This paper presents sufficient supplemental materials for the readers to understand the prompts.\\n\\n5. Overall, it is easy to read and understand this paper, with clear arguments.\", \"weaknesses\": \"W1. One of my main concerns is the novelty in model/framework design. The proposed SOE framework looks more like a dataset processing and LLM testing pipeline. However, there is no specific model design or new prompt structure design. The authors simply instruct the LLMs to mimic students with different traits based on the big five rationale. The LLMs used from existing work are directly fine-tuned on a new dataset. There are no unique insights in prompt design or fine-tuning strategies, compared with previous work. Moreover, the incorporation of big five theory for agent personality is also widely utilized in existing work such as [1]. Therefore, this is neither a novelty nor a contribution.\\n\\nW2. Another main concern is that I think this work cannot really support its claim in line 532 \\\"resembled real student behavior\\\" and line 533 \\u201csimulating realistic student roles\\u201d. 
I agree with line 053 \\u201cvirtual student agents must replicate a broader range of human behaviors\\u201d. But such replication needs labels/ground truth to demonstrate and quantify the accuracy of such replication. A simple Turing test cannot really support that, and lots of existing work has explored LLMs passing the Turing test [2] and a new Turing experiment [3]. That said, a Turing test can show that your agent simulation is human-like, but that does not necessarily mean that your agent simulation realistically reflects students\\u2019 real behaviors, which needs more convincing ground truth for comparison. But this work does not have a real ground truth for comparison. \\n\\nSpecifically, I also do not understand why the five kinds of agents (HN, HA, LO, HE, LC) are combined with real students as six types together for evaluation in appendix figure A18. Do the authors mean that human evaluators are asked to distinguish one of the six types? I\\u2019m confused because at first the authors say that this work aims to simulate student traits. So I thought real students serve as the labels instead of an additional type in addition to the five traits. Such a Turing test setting can only show that humans cannot distinguish whether the expressions are from agents or real students. This is not surprising considering existing work showing the performance of LLMs in the Turing test such as [2]. However, it cannot show that such simulation is realistic because there is no ground truth for comparison. \\n\\nW3. I\\u2019m also concerned regarding the limited generalization and evaluation. This work only uses one Chinese dataset from the authors. No public datasets or English datasets are used. It is uncertain whether the performance of this framework generalizes well.\\n\\nW4. Moreover, another important concern is the lack of a real baseline. The evaluation simply compares pre-finetuning with post-finetuning. 
It is not surprising that fine-tuning on the targeted dataset can result in better performance. But there is not a real baseline to show the unique advantage of the proposed SOE framework, compared with existing approaches.\\n\\nW5. Furthermore, using GPT-4 for automatic evaluation is not convincing. It looks like the authors are using one LLM to evaluate other LLMs\\u2019 performance. No ground truth is used for more convincing evaluation. The high inter-coder reliability mentioned in line 373 just proves that the two students have consistent opinions in annotation. But it cannot prove that GPT-4 is a convincing automatic evaluation tool to replace real human annotators to perform automatic evaluation.\\n\\nW6. In Appendix D.5.3 and figure A18, since GPT-4 can \\u201ceffectively simulate and assess nuanced student behaviors\\u201d (from line 4236), why not directly use GPT-4 for student trait simulation? Why do you use other LLMs to perform such simulation and use GPT-4 to serve as the evaluator? Furthermore, in line 4244, \\u201cGPT-4 and human evaluators was found to be 0.6806, which falls within the range of substantial agreement\\u201d. Why is 0.6806 acceptable, and why is it within the range of substantial agreement? Who defines the agreement? Is there a standard rule for the value range?\\n\\nW7. I\\u2019m really confused about what the evaluation score means in section 5.4. What does the score refer to? What does 100% mean in line 416? Does it refer to the accuracy of human evaluators or GPT-4 in distinguishing different types of agent traits? Similarly, for different learning stages in line 463 and different question types in line 482, what does the percent % mean? Does it refer to accuracy or some other metric, and what does it measure?\\n\\nW8. Line 478: \\u201cThese findings suggest that LVSA effectively adapts to different learning stages, offering comprehensive support for pre-service teacher training. 
Virtual students enable teachers to practice and refine instructional strategies across all teaching phases, enhancing skill development throughout the entire instructional process.\\u201d These assertions are over-claimed. If you want to show its effectiveness in helping teachers, then you need to conduct a real teaching experiment to show it, which is more convincing. Otherwise, such assertions sound weak.\\n\\nW9. Line 334: No ethics information (such as IRB approval) about recruiting human participants for evaluation and releasing collected datasets to the public. Potential ethical concerns.\\n\\nW10. 115 samples are too limited for human evaluation. The number of questions used for human evaluation in Appendix Figure A11 and table A3 is too limited. It is also unclear why you only use 40 samples or 35 samples per setting, which is quite confusing. Why do all five fatigue test items come from the real student responses? Why not use 2 items from each group of fine-tuned models, direct inference, and real students?\\n\\nW11. For the section 5.2 human Turing test, simply showing the average results of all human evaluators is neither a standard nor a convincing approach. A formal statistical analysis such as ANOVA is needed.\", \"reference\": \"[1]. CGMI: Configurable General Multi-Agent Interaction Framework, Shi et al. 2023\\n\\n[2]. Does GPT-4 pass the Turing test? Jones et al. 2023\\n\\n[3]. Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies, ICML 2023\", \"questions\": \"All questions are listed in the weaknesses above.\", \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"No ethics information (such as IRB approval) about recruiting human participants for evaluation and releasing collected datasets to the public. 
Potential ethical concerns.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer rtkz's concern about adaptability of the pipeline\", \"comment\": \"***W5&Q5: How adaptable is the SOE pipeline to different classroom contexts, age groups, or cultural backgrounds? Could it be easily fine-tuned for students in different educational systems or with varying cultural characteristics?***\\n\\n**Response**: We sincerely thank the reviewer for the kind suggestion to discuss how the SOE pipeline can adapt to different classroom contexts, age groups, and cultural backgrounds. In the discussion section, we have further elaborated on the adaptability of the SOE pipeline across multiple disciplines, age groups, and cultural contexts. **As mentioned in common concern 1, the SOE framework proposed in this study is both general and modular.** It is designed to flexibly accommodate the needs of various disciplines, providing a clear and actionable research pathway for multidisciplinary applications. \\n\\nBelow is our detailed response to this issue.\\n\\n**1. Design and Adaptability of the SOE Pipeline**\\nThe core goal of the SOE pipeline is to provide a highly generalizable and extensible framework to address the challenges of modeling and evaluating virtual students in educational contexts. Its modular design allows for flexible adaptation to various educational systems and teaching needs. **This structure not only supports adjustments tailored to specific disciplinary requirements but also optimizes for the characteristics of students across different age groups and cultural backgrounds.**\\n- **Scene Module**: Enables the flexible definition of subject contexts based on different classroom scenarios and teaching objectives. 
For example, it validates and ensures that base models have the requisite knowledge and reasoning capabilities for mathematics, thus constructing appropriate virtual students.\\n- **Object Module**: Generates personalized virtual students through data fine-tuning, capable of simulating student behaviors across various age groups and cultural contexts. For instance, it supports designing simplified language interactions for younger students and adding complex reasoning tasks or open discussions for older students by constructing fine-tuning instruction datasets.\\n- **Evaluation Module**: Supports the adjustment of evaluation standards based on the scenario to ensure that the evaluation results align with the objectives of specific educational contexts.\\n\\n**2. Validation of SOE Pipeline Adaptability**\\nLanguage studies, with their openness and diversity, provide an ideal environment for simulating interaction needs across different personality traits and subject teaching content. **This makes them well-suited for validating the framework\\u2019s generalizability across various teaching stages.** In this study, we have preliminarily validated the applicability of the SOE pipeline in specific language-based scenarios, laying a solid foundation for expanding the SOE pipeline into other disciplines, cultures, and age groups.\\n- **Adaptability Across Teaching Stages**: Experiments demonstrated that virtual students can adapt to multiple teaching stages (e.g., lesson introduction, knowledge instruction, and review) and exhibit behavior characteristics aligned with the goals of these scenarios.\\n- **Adaptability to Different Question Types**: Virtual students can accommodate both open-ended questions (e.g., problem-solving and creative thinking) and closed-ended questions (e.g., fixed-answer knowledge responses), addressing diverse cognitive needs.\\n\\n**3. 
Next Steps for Exploring Applications in Other Disciplines**\\n- **Cultural and Age Adaptability**: By adjusting training datasets to include examples from different cultural backgrounds, educational practices, and language styles, LVSA can enhance its applicability in multicultural and multilingual educational contexts. For example, in multilingual education, training data can include scenarios of bilingual or multilingual teaching interactions.\\n- **Adaptation to Diverse Educational Systems**: To meet the educational objectives and teaching methodologies of different countries and regions, datasets covering a wide range of disciplines and teaching tasks will be designed.\\n- **Refined Fine-Tuning and Data Augmentation**: By leveraging data augmentation techniques to expand the coverage of small-sample datasets and applying fine-tuning to improve the model\\u2019s adaptability to new tasks, LVSA can better align with specific educational contexts.\\n\\nWe are deeply grateful for the suggestion, which has greatly advanced our understanding of the adaptability and universality of the SOE pipeline. Moving forward, we will continue to expand and deepen this research direction to ensure that LVSA can deliver maximum value across global educational systems.\"}", "{\"title\": \"Common Concern 1\\uff1aGeneralizability of the SOE pipeline\", \"comment\": \"We sincerely thank the reviewers for their concerns regarding the generalization and adaptability of SOE pipeline. In the discussion section, we have further elaborated on the adaptability of the SOE pipeline across multiple disciplines, age groups, and cultural contexts. The SOE framework proposed in this study is both general and modular. It is designed to flexibly accommodate the needs of various disciplines, providing a clear and actionable research pathway for multidisciplinary applications.\\n\\nBelow is our detailed response to this issue.\\n\\n**1. 
Design and Adaptability of the SOE Pipeline**\\n\\nThe core goal of the SOE pipeline is to provide a highly generalizable and extensible framework to address the challenges of modeling and evaluating virtual students in educational contexts. Its modular design allows for flexible adaptation to various educational systems and teaching needs. This structure not only supports adjustments tailored to specific disciplinary requirements but also optimizes for the characteristics of students across different age groups and cultural backgrounds.\\n\\n- **Scene Module**: Enables the flexible definition of subject contexts based on different classroom scenarios and teaching objectives. For example, it validates and ensures that base models have the requisite knowledge and reasoning capabilities for mathematics, thus constructing appropriate virtual students.\\n- **Object Module**: Generates personalized virtual students through data fine-tuning, capable of simulating student behaviors across various age groups and cultural contexts. For instance, it supports designing simplified language interactions for younger students and adding complex reasoning tasks or open discussions for older students by constructing fine-tuning instruction datasets.\\n- **Evaluation Module**: Supports the adjustment of evaluation standards based on the scenario to ensure that the evaluation results align with the objectives of specific educational contexts.\\n\\n**2. Validation of SOE Pipeline Adaptability:**\\n\\nLanguage studies, with their openness and diversity, provide an ideal environment for simulating interaction needs across different personality traits and subject teaching content. This makes them well-suited for validating the framework\\u2019s generalizability across various teaching stages. 
In this study, we have preliminarily validated the applicability of the SOE pipeline in specific language-based scenarios, laying a solid foundation for expanding the SOE pipeline into other disciplines, cultures, and age groups.\\n\\n- **Adaptability Across Teaching Stages**: Experiments demonstrated that virtual students can adapt to multiple teaching stages (e.g., lesson introduction, knowledge instruction, and review) and exhibit behavior characteristics aligned with the goals of these scenarios.\\n- **Adaptability to Different Question Types**: Virtual students can accommodate both open-ended questions (e.g., problem-solving and creative thinking) and closed-ended questions (e.g., fixed-answer knowledge responses), addressing diverse cognitive needs.\\n\\n**3. Next Steps for Exploring Applications in Other Disciplines:**\\n\\n- **Cultural and Age Adaptability**: By adjusting training datasets to include examples from different cultural backgrounds, educational practices, and language styles, LVSA can enhance its applicability in multicultural and multilingual educational contexts. For example, in multilingual education, training data can include scenarios of bilingual or multilingual teaching interactions.\\n- **Adaptation to Diverse Educational Systems**: To meet the educational objectives and teaching methodologies of different countries and regions, datasets covering a wide range of disciplines and teaching tasks will be designed.\\n- **Refined Fine-Tuning and Data Augmentation**: By leveraging data augmentation techniques to expand the coverage of small-sample datasets and applying fine-tuning to improve the model\\u2019s adaptability to new tasks, LVSA can better align with specific educational contexts.\\n\\nWe are deeply grateful for reviewers' valuable suggestions, which have greatly advanced our understanding of the adaptability and universality of the SOE pipeline. 
Moving forward, we will continue to expand and deepen this research direction to ensure that LVSA can deliver maximum value across global educational systems.\"}", "{\"title\": \"Rebuttal Summary\", \"comment\": [\"We would like to express our heartfelt gratitude to all reviewers and the Area Chair (AC) for their thorough evaluation and thoughtful feedback on this study, which have greatly enhanced the academic value of our work and encouraged critical reflection. The reviewers have highlighted the following key contributions of our paper:\", \"**Research contributions align with ICLR's focus**: This study aligns with ICLR\\u2019s interest in innovative AI applications in education and social simulation, showcasing potential to inspire future research (Reviewer rtkz).\", \"**Originality and innovation**: By shifting from teacher-centered AI to virtual student simulations, this study pioneers a new direction in AI4Education and introduces a novel approach to personalized student modeling (Reviewers rtkz, yWBj).\", \"**Significant educational value**: The application of virtual students in pre-service teacher training addresses the lack of student interaction in traditional training, enhancing teachers' readiness, adaptability, and effectiveness (Reviewers wxDr, rtkz, yWBj).\", \"**Comprehensive and rigorous research methodology**: The SOE (Scenario-Object-Evaluation) pipeline integrates psychological theories and AI fine-tuning to systematically tackle virtual student modeling and evaluation with scientific rigor (Reviewers rtkz, 3QkY, yWBj).\", \"**Robust experimental validation**: A large-scale dataset and comprehensive experiments validate virtual student performance, with supplementary materials providing deeper insights (Reviewer 3QkY).\", \"**Clear and comprehensible writing**: The visualization of the SOE framework and structured experimental design make the research accessible and easy to understand (Reviewers wxDr, rtkz, 3QkY).\", \"**Distinct 
interdisciplinary characteristics**: This study integrates education, psychology, and AI, adding depth to AI applications in education (Reviewers rtkz, yWBj).\", \"Based on the reviewers' invaluable suggestions, we have made extensive revisions to the paper, with all changes highlighted in blue. The key revisions are as follows:\", \"**1. Expanded Experiments and Evaluations**\", \"**Objective Evaluation Experiments:** To complement subjective evaluation, four objective metrics\\u2014Text Token, Perplexity, TTR, and Sentiment Analysis\\u2014were introduced and added in Sec 5.5 and Appendix D.6, along with visualizations of the objective evaluation results (fig. 7, fig. A25-A28). These experiments collectively validate the personalization and human-like attributes of LVSA modeling.\", \"**ANOVA Experiments:** One-way and two-way ANOVA analyses were added to Sec 5.2, Appendix D.4.2, Table A7, and fig. A22. The results confirm significant statistical differences before and after fine-tuning, with no significant differences between virtual students and real students in human evaluations.\", \"**Fine-Grained Analysis of Virtual Student Responses Across Teaching Stages:** Specific examples of responses from five types of virtual students across different teaching stages and question types were added to Sec 4.3 and Appendix D.2. Using content analysis, these responses were examined in depth to reveal how they differentiate and reflect the unique personality traits of virtual students.\", \"**2. 
Supplemented Details in the Manuscript**\", \"**Refined Abstract and Introduction:** Added description of objective evaluation: \\\"we conduct multi-dimensional evaluation experiments that integrate both subjective human evaluations and objective metrics.\\\"\", \"**Refined Discussion on LC Personality:** In Sec 5.6 and Appendix D7.1, further detailed analyses of the LC personality are provided.\", \"**Adjustments to the Discussion Section:** In Sec 6, descriptions of the SOE framework\\u2019s generalization capability have been added, along with future directions.\", \"**Supplemented Description of Evaluation Metrics:** In Sec 5, a clarification was added: \\u201cThe metrics are based on the probability of virtual responses being judged by humans or GPT-4 as resembling real student responses.\\u201d\", \"**Clarification of Real Students\\u2019 Role in the Turing Test:** In Sec 5.2, it was emphasized that \\u201cInstead of using a fixed ground truth, real students served as a control group to assess the human-like characteristics and role-playing effectiveness of LVSA in educational interactions.\\u201d\", \"**Clarification of Experimental Conclusions and Their Boundaries:** In Sec 5.4 and Sec 6, it was clarified that the current results are based on controlled experimental settings and human feedback.\", \"**3. Introduction of Ethical Protection**\", \"An ethical protection and data privacy statement was added in Appendix D.8.\", \"We sincerely thank the reviewers for their valuable suggestions, which greatly improved our paper\\u2019s quality and impact. We believe the revised manuscript is now more rigorous and compelling. Below, we address three common concerns first, followed by point-by-point responses to specific comments.\"]}", "{\"title\": \"Response to Reviewer 3QkY's Concern About Baseline Comparison\", \"comment\": \"We sincerely thank the reviewer for the valuable feedback. 
We understand the concern regarding the comparison with baseline models and would like to further clarify our considerations regarding the experimental design.\\n\\nWe would like to emphasize that **although our study does not directly employ a relevant baseline model, we have made efforts to approximate existing work on virtual student modeling based on prompt engineering during the pre-fine-tuning phase.** Since this aspect was not explicitly addressed in the manuscript, **we will add a clarification in the article stating, \\\"The 'pre' refers to a baseline comparison with existing prompt-engineering-based virtual student modeling research, where prompt engineering was used.\\\"** In response to the reviewer's concerns about baseline comparison, we provide the following detailed explanation:\\n\\n**1. Existing Work Primarily Uses Large Language Models with Prompt Engineering, Not Fine-Tuning**\\n\\nRegarding baseline models in related work, we have made efforts to find relevant benchmarks for our method. However, most existing large language model-based prompt-engineering approaches (such as those involving GPT-3 or GPT-4 prompts) do not involve fine-tuning [13-15]. The SOE framework we propose is based on fine-tuning for virtual student modeling. Therefore, the existing baseline approaches cannot be directly compared to our method, as they primarily rely on prompt engineering without incorporating detailed model fine-tuning. **In our experiment, we used prompt engineering to perform preliminary experiments, enabling a comparison with the proposed SOE pipeline.** However, this process was not fully detailed in the manuscript. **Following the reviewer\\u2019s suggestion, we will include a more comprehensive explanation in the final version of the paper, describing how prompt engineering was used for model guidance before fine-tuning and how it compares to the existing pipeline.**\\n\\n**2. 
The Uniqueness of the Research Results in the Lack of Suitable Baseline Comparisons**\\n\\nCurrent educational field benchmark models lack comprehensive modeling of student behavior complexity (e.g., personality differences, cognitive levels, etc.). Since the SOE framework focuses on multi-dimensional virtual student modeling (e.g., personality traits, cognitive levels) and integrated educational assessment, existing baseline models are not fully applicable in this regard. Furthermore, these models do not propose detailed pipeline processes or directly open-source their work, making it impossible for us to compare with existing baseline models.\\n\\nThe core innovation of the SOE framework lies in providing an operational, scalable research framework for virtual student modeling and educational assessment, integrating interdisciplinary theories to support personalized modeling and evaluation across various educational scenarios. Therefore, **the contribution of this study is more focused on presenting a new research methodology and approach, rather than purely optimizing algorithms or comparing with baseline models.**\\n\\nWe again appreciate the reviewer's thorough review of our work, and we hope that our responses above address your concerns.\\n\\n[13] Zhang, Z., Zhang-Li, D., Yu, J., Gong, L., Zhou, J., Liu, Z., ... & Li, J. (2024). Simulating classroom education with llm-empowered agents. arXiv preprint arXiv:2406.19226.\\n\\n[14] Lee, U., Lee, S., Koh, J., Jeong, Y., Jung, H., Byun, G., ... & Kim, H. Generative Agent for Teacher Training: Designing Educational Problem-Solving Simulations with Large Language Model-based Agents for Pre-Service Teachers.\\n\\n[15] Markel, J. M., Opferman, S. G., Landay, J. A., & Piech, C. (2023, July). Gpteach: Interactive ta training with gpt-based students. In Proceedings of the tenth acm conference on learning@ scale (pp. 
226-236).\"}", "{\"title\": \"Response to Reviewer yWBj's concern about mitigating hallucinations\", \"comment\": \"***Q2: What specific strategies can be employed to minimize hallucinations in virtual student responses? Would incorporating additional real-world classroom data or using more advanced fine-tuning methods help in this regard?***\\n\\n**Response**: Thanks for raising the issue of mitigating hallucinations in the content generated by LVSA. Hallucination is a common and critical challenge in applying LLMs. We have implemented various strategies to minimize such issues and plan to further enhance these measures in future work, which has also been added into **Sec 6**. \\n\\nBelow, we provide a detailed description of our current strategies and future improvement directions.\\n\\n**1. Current Strategies in This Study**\\n\\nIn this research, we employed the following strategies to mitigate hallucination issues:\\n- **Explicit Personality Trait Prompt Design** (see Appendix B.2.2): We emphasized incorporating detailed descriptions of personality traits in the prompts. This helps guide the model to generate content consistent with the specified personality traits.\\n- **Contextualized Content Generation for Teaching Scenarios** (see Appendix C.2.2): By linking generation tasks with specific teaching contexts, we directed the model to produce relevant and practical teaching content, thereby reducing irrelevant or misaligned hallucination content.\\n- **Post-Processing and Verification of Generated Content** (see Appendix C.1.2): All model outputs were verified to ensure consistency and logical coherence by expert revision. This step ensures that the generated content adheres to teaching standards and accurately reflects realistic student behaviors.\\n\\nWhile these methods have reduced hallucinations to a certain extent, we acknowledge that discrepancies between generated content and actual classroom contexts may still occur.\\n\\n**2. 
Insights and Improvements for Future Work**\\n\\nBased on your suggestions, we believe the following directions can further mitigate hallucination issues:\\n- **Incorporating Real Classroom Data**: We plan to collaborate with schools to collect high-quality teacher-student interaction data from actual classroom scenarios. This will enhance the diversity and authenticity of the data, providing stronger constraints on the model's outputs.\\n- **Optimizing Personalized Modeling Mechanisms**: We aim to explore and implement more precise personalization modeling techniques, such as reinforcement learning or multi-objective fine-tuning, to introduce stronger personality constraints and ensure the model accurately simulates behaviors of various complex personalities.\\n- **Strengthening Monitoring and Correction of Hallucinated Content**: We plan to develop specialized mechanisms for monitoring and correcting hallucinated content. For instance, natural language processing techniques can be employed to detect and rectify inconsistencies or illogical content. Dedicated detection and correction mechanisms for hallucinations will further enhance the reliability and coherence of generated content.\\n\\nWe place great importance on addressing hallucination issues and will continue exploring and implementing effective solutions in this area. We believe that these measures will significantly improve the quality of virtual student models and their effectiveness in educational applications. Thanks again for the valuable suggestions, which will help us better tackle this challenge in future research.\"}", "{\"metareview\": \"This paper aims to model LLM-based Virtual Student Agents (LVSA) that mimic the behaviors and personalities of early adolescent students in educational settings. Reviewers recognized that this paper studies a quite interesting problem and the new framework is well motivated. 
However, reviewers also raised some major concerns on contributions, technical details, model justifications, evaluations, etc. Although some of the concerns have been addressed by the authors during the rebuttal and discussion stage, reviewers still found that some issues cannot be easily addressed by a minor revision, such as novelty and generalization ability. Overall, the current version of this work is not ready for publication at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised some major concerns about novelty and contributions, technical details, model justifications, evaluations, etc. The authors have provided very detailed responses during the rebuttal stage, which have addressed some of the concerns. However, some major issues still remain, such as novelty and generalization ability.\"}", "{\"title\": \"Response to Reviewer yWBj's concern about the use of real-world data in the SOE pipeline\", \"comment\": \"***W1: While the framework is well-developed, the fine-tuning process relies on datasets sourced from controlled environments, which may not fully capture the complexity of real-world classroom dynamics.***\\n\\n**Response**: Thanks for recognizing the SOE pipeline and for the valuable suggestions regarding the incorporation of real-world data. In our current study, we did consider using more authentic classroom data. **However, real-world classrooms often rely heavily on traditional teaching methods, where student responses are limited and lack diversity.** This makes it difficult to meet the requirements for balanced sampling. Additionally, in the context of pre-service teacher training, LVSA need to exhibit distinct personality traits to help teachers adapt to diverse student needs. 
The available real-world data is therefore highly limited, which led us to develop and validate the model using data generated in controlled environments.\\n\\n**It is worth mentioning that after reading your comments, we have been inspired to engage in deeper discussions as a team**, particularly regarding how real-world data could be integrated into future research. We plan to take the following steps in subsequent studies:\\n\\n- **Selecting Representative Personality Traits**: Based on five key personality traits, we aim to choose representative students from various disciplines, cultural backgrounds, and age groups.\\n- **Collecting High-Quality Teacher-Student Interaction Data**: We plan to invite students and teachers to participate in one-on-one teaching sessions and collect interaction data during these sessions. These natural teaching interactions will be recorded and annotated to provide more realistic data for virtual student modeling.\\n- **Enhancing Data Diversity and Coverage**: Gradually expanding data sources to include more classroom formats and educational scenarios, thereby further improving the practicality and generalizability of the SOE framework.\\n\\nWe understand that translating research conducted under laboratory conditions into technology applicable to real-world education poses many challenges. The reviewer's feedback has provided us with valuable direction, which will guide us in better integrating and leveraging real-world data in future studies. Thanks again for the valuable suggestions. 
We look forward to translating these insights into tangible results in subsequent research.\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about ethical issues\", \"comment\": \"***Q9: No ethic information (such as IRB approval) about recruiting human participants for evaluation and releasing collected datasets to the public.***\\n\\n**Response**: Thanks for the reviewer's concerns regarding the ethical aspects of this study, including the recruitment of human participants for evaluation and the release of collected datasets to the public. Ensuring compliance with ethical standards is a fundamental part of our research, especially when involving human participants. **We have added this content to the revised manuscript (see Appendix D.8) to provide readers with a clearer understanding of the study's ethical compliance and safety measures.**\\n\\nBelow are the detailed clarifications regarding ethical review and data handling.\\n\\n**1. This Study Falls Under the Scope of Ethical Exemption**\\n\\nThe study was designed in accordance with ethical guidelines for non-interventional research. **Since participants were not required to undergo any interventions that might pose risks, all data collection and processing activities avoided sensitive personal information and implemented anonymization measures.** Thus, this research qualifies for ethical exemption under the ethical review regulations of our institution.\\n\\n**2. 
Data Handling and Protection Measures**\\n\\nDuring the design and implementation of the study, the following measures were taken to ensure participants\\u2019 data privacy and rights were fully protected:\\n- **Anonymization**: All experimental data were anonymized immediately after collection, ensuring that the data could not be linked to participants\\u2019 personal characteristics or private information, thus completely eliminating the risk of data leakage.\\n- **Risk-Free Experimental Design**: All experiments were conducted under low-risk, non-interventional conditions, ensuring that participants were not subjected to any psychological or physical risks.\\n- **Informed Consent**: All participants were fully informed about the purpose of the study, the intended use of the data, and their rights prior to participation, ensuring voluntary involvement.\\n- **Adherence to Standard Data Collection Procedures**: Data collection followed academic standards to ensure scientific rigor and compliance.\\n\\n**3. Use and Public Release of Datasets**\\nThe main data used in this study include human evaluation data and generated data, with appropriate handling to address potential ethical concerns:\\n- **Human Evaluation Data**: This data includes evaluators' opinions and, while anonymized, is not made publicly available to prevent any potential risk of information leakage. It is used solely for internal analysis to support the research hypotheses.\\n- **Generated Data**: All virtual student responses were generated by GPT-4 and do not involve any real personal data. Therefore, the public release of this data does not raise ethical concerns and can be used for academic exchange and further research.\\n\\nThrough these measures, we have ensured the ethical compliance of the study and fully protected participants' privacy. 
We value the reviewer's feedback and will continue to ensure that all research activities adhere to the highest ethical standards.\"}", "{\"title\": \"Response to Reviewer 3QkY's Concern About the Pipeline Novelty and Generalization (Part VI: Generalization of the SOE pipeline regarding multiple datasets)\", \"comment\": \"***Reviewer\\u2019s Concern on Demonstrating Generalization Using Multiple Datasets***\\n\\nThe reviewer mentioned that \\\"to really demonstrate the generalization ability, it is always more convincing to run a real experiment on additional datasets, instead of simply putting it into future work.\\\"\\n\\nWe understand the reviewer\\u2019s suggestion to further validate the pipeline\\u2019s generalization using multiple datasets. **Indeed, we initially sought publicly available datasets. However, we found that datasets with teacher-student dialogues suitable for Big Five personality modeling are extremely rare**. Given our study\\u2019s need for fine-grained personalized modeling, including personality modeling, cognitive levels, and problem-type designs, we discovered that very few existing datasets fully meet these needs. Furthermore, many educational datasets are subject to ethical protection protocols, and some are restricted to internal research use only, preventing us from using them for validation. However, **our research does not rely solely on the diversity of datasets to demonstrate generalization. Instead, it focuses on a data-driven framework and methodology with cross-context, cross-disciplinary adaptability.**\\n\\nMoreover, we want to emphasize that cross-dataset generalization testing is more concerned with evaluating the language transferability of large language models, a capability that has already been extensively validated in the literature [11-12]. We believe **the core challenge in our work lies in enabling the hierarchical understanding capabilities of virtual students in different scenarios. 
This hierarchical understanding requires virtual students to progress from superficial comprehension to deeper contextual understanding, ultimately interpreting the emotional themes conveyed in texts\\u2014an understanding that relates closely to their personality traits.** In our custom-built dataset, we have already conducted systematic experimental validations centered around this core need, demonstrating the hierarchical understanding abilities of virtual students in text comprehension tasks and across various teaching stages. For example, students with high extraversion and agreeableness exhibit more prominent comprehension abilities, while students with low conscientiousness and openness show mixed responses (see Appendix D.2.2). **We have demonstrated that the SOE pipeline can support the expression of personalized, hierarchical understanding abilities in our dataset.** For datasets in different languages, the language transferability of large language models can better adapt and optimize these abilities.\\n\\nThrough the above points, we hope to clarify the reviewer's concerns about the dataset and generalization capabilities. **We emphasize that the SOE pipeline's objective is not to innovate AI algorithm models but to serve as an interdisciplinary, multi-scenario educational research tool.** It provides scientifically grounded, actionable solutions across various educational stages, question types, and subject backgrounds. The diversity of datasets is not the core focus of our research; **what truly matters is the framework\\u2019s adaptability and flexibility and whether it can effectively support personalized modeling and evaluation tasks for virtual students.**\\n\\nWe again thank the reviewer for their valuable feedback. We will continue to refine and strengthen the framework\\u2019s generalizability and applicability. 
Should there be any further questions, we welcome continued discussion.\\n\\n[11] Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., ... & Fiedel, N. (2023). Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240), 1-113.\\n\\n[12] Ahuja, K., Diddee, H., Hada, R., Ochieng, M., Ramesh, K., Jain, P., ... & Sitaram, S. (2023). Mega: Multilingual evaluation of generative ai. arXiv preprint arXiv:2303.12528.\"}", "{\"comment\": \"Thank you for your detailed responses.\\nI appreciate the efforts you've made to address the concerns I raised, especially regarding evaluation biases and ethical considerations. I hence change my score from 3 to 5. \\n\\nBest of luck!\"}", "{\"comment\": \"I appreciate the authors' efforts in explanation. I will maintain my adjusted score. Thanks!\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about sample size design and fatigue testing\", \"comment\": \"***Q10: The number of questions used for human evaluation in Appendix Figure A11 and table A3 is too limited. Why all five fatigue test items come from the real student responses?***\\n\\n**Response**: Thanks for the reviewer's detailed attention to the sample size design and fatigue testing approach in this study. The decisions regarding sample size and fatigue detection were made to balance the cognitive load of human evaluators and the stability of evaluation groups, in alignment with norms in social science research. \\n\\nBelow are our detailed responses.\\n\\n**1. Explanation of the Sample Size Design**\\n- **Balancing Sample Size and Evaluation Quality:**\\nWhile the limited sample size might influence the experimental results to some extent, ensuring the quality of evaluations was our primary objective. 
The following factors were specifically considered:\\n - **Cognitive Load and Evaluator Fatigue**: Our sample size design accounted for the potential cognitive burden and fatigue that evaluators might experience during long evaluation sessions [13]. Human evaluation tasks require multiple evaluators to rate each sample in detail and compare outputs from different models (e.g., fine-tuned models, non-fine-tuned models, and real student responses). This high-intensity task can lead to fatigue effects. A smaller sample size helps maintain evaluators\\u2019 focus and improves evaluation accuracy and consistency.\\n - **Time Management**: To ensure the efficiency and comfort of the evaluation process, we limited the time for each evaluation session (no more than one hour per round) and the number of samples per session. The group sizes (40, 35, and 40 samples) were chosen to balance rapid task completion with sufficient data representativeness and comprehensiveness.\\n - **Consistency With Research Norms**: Our sample size aligns with similar studies in the evaluation of generative models. For instance, the EvalLM system uses small-scale samples to reduce the cognitive burden on human participants while incorporating multi-faceted evaluations, including 40 samples for subjective ratings. Additionally, Fleiss's Kappa is used to validate consistency among evaluators in this paper [14]. Although our sample size is limited, we ensured the reliability of the results through multi-evaluator ratings and consistency analyses (e.g., Fleiss's Kappa).\\n\\n**2. 
Justification for Fatigue Testing Sample Selection**\\n\\nIn the fatigue test, we selected five responses from real students as test samples based on the following considerations:\\n- **Stability of the Baseline**: Choosing responses from real students as test samples provides a natural and standard baseline, enabling evaluators to assess under conditions that closely mimic actual teaching scenarios.\\n- **Focus on Fatigue Effects**: The core objective of fatigue testing is to observe the stability of evaluators' judgments over prolonged evaluation sessions, not to compare the performance of different models. By consistently using real student responses, we can more accurately analyze the impact of fatigue on evaluators' efficiency and accuracy.\\n\\nThis approach ensures the scientific validity and practicality of the evaluation process while balancing evaluators' workload and maintaining overall efficiency. We believe this design meets the needs of the study and aligns with ethical and operational standards in educational research. Thanks again for the reviewer's detailed review. In future research, we will continue exploring ways to optimize sample size and testing settings to further enhance the applicability and scientific rigor of our work.\\n\\n[13] Galesic, M., & Bosnjak, M. (2009). Effects of questionnaire length on participation and indicators of response quality in a web survey. Public opinion quarterly, 73(2), 349-360.\\n\\n[14] Kim, T. S., Lee, Y., Shin, J., Kim, Y. H., & Kim, J. (2024, May). Evallm: Interactive evaluation of large language model prompts on user-defined criteria. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 
1-21).\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about the lack of real baseline for comparison\", \"comment\": \"***Q4: There is not a real baseline to show the unique advantage of the proposed SOE framework, compared with existing approaches.***\\n\\n**Response**: We appreciate the reviewer's interest in baseline comparisons. We understand the concerns raised by the reviewer regarding the lack of a \\u201ctrue baseline\\u201d in the current assessment. **Due to the innovative and specialized nature of our research, there are currently no directly comparable existing works or widely accepted baselines that can provide a comprehensive comparison.** Our study aims to develop a novel SOE (Scene-Object-Evaluation) pipeline specifically designed for virtual student modeling, filling a gap in this field and offering a robust baseline for future research. \\n\\nBelow, we clarify the logic behind the design of this study and explain why the current evaluation method is reasonable and scientifically sound.\\n\\n**The primary goal of this research is to propose a systematic SOE pipeline to address the challenges in virtual student modeling and evaluation, rather than optimizing the performance of a single algorithm.** The core contribution of the SOE pipeline lies in providing a generalizable framework that includes Scene (context setting), Object (student modeling), and Evaluation (multi-dimensional assessment) components. Through this framework, we systematically explore LVSA behaviors across different teaching stages, personality traits, and question types, offering a new paradigm for virtual student research. **Importantly, the SOE pipeline integrates pedagogical evaluation logic with AI research, accommodating both educational and technical needs.** This research paradigm is unprecedented in the existing literature and holds significant originality and academic value. This unique contribution has also been acknowledged in Reviewer rtkz's comments. 
**While adopting publicly available baselines is common in traditional AI research, the lack of comparable frameworks or general datasets for virtual student modeling currently limits broader comparisons.**\\n\\nTo address this limitation, we used non-fine-tuned large language models as a baseline for comparison. This approach is widely accepted in the field [4]. **The performance of non-fine-tuned models not only reflects the improvement achieved through fine-tuning but also validates the modeling difficulty of the target task and the effectiveness of the fine-tuning method.** Comparing performance pre and post fine-tuning is a key step in verifying the pipeline's validity and refining its functionality, rather than simply comparing algorithm performance. Using non-fine-tuned models as a baseline aligns with current academic standards and is a reasonable choice. **It is worth emphasizing that the outputs generated by non-fine-tuned large language models are not random but are produced using carefully designed instructions via prompt engineering.** This approach allows us to preliminarily demonstrate the improvements achieved by the SOE pipeline\\u2019s fine-tuning strategy in educational scenarios.\\n\\n**The SOE pipeline\\u2019s strength lies in its modularity and extensibility rather than in optimizing a single performance metric.** To better highlight the distinctiveness and contribution of the SOE pipeline, our experiments employed multi-dimensional evaluation methods to verify its support for virtual student modeling throughout the teaching process. During this process, we focused on addressing the challenge of evaluating subjective responses from virtual students, rather than emphasizing model performance on existing benchmark tasks.\\n\\n[4] Zhang, R., Han, J., Liu, C., Gao, P., Zhou, A., Hu, X., ... & Qiao, Y. (2023). Llama-adapter: Efficient fine-tuning of language models with zero-init attention. 
arXiv preprint arXiv:2303.16199.\"}", "{\"title\": \"Response to Reviewer wxDr's concern about evaluation bias and additional evaluation metrics\", \"comment\": \"***W1&Q3&Q4: What steps are being taken to address biases that might occur in the evaluations? Did the authors explore additional evaluation metrics?***\\n\\n**Response:** We sincerely thank the reviewer for the valuable feedback on evaluation methods. Recognizing the importance of addressing bias and ensuring diversity, we implemented several measures: integrating subjective and automated evaluations (Sec 5), constructing a multidimensional framework (Sec 3), and selecting professional evaluators (Appendix D.3). Based on your suggestions, we conducted supplementary objective experiments, which validated the subjective evaluation results and highlighted the limitations of objective metrics. These findings further support our study's core focus on the \\\"challenge of evaluating virtual students.\\\"\\n\\nBelow are our specific responses.\\n\\n**1. On the Bias Issues of Subjective Evaluation and Automated Evaluation (GPT-4)**\\n- **Scientific Rigor and Reasonableness**: We combined subjective evaluation with GPT-4-based automated assessment, a widely adopted framework in LLM research [1]. Liu et al. (2023) validated GPT-4 for NLG task evaluations, noting low consistency between objective metrics and human assessments, especially for open-ended tasks [2]. Li (2023) highlighted the need for multiple metrics in GPT-4 prompt engineering [3], and Fleiss\\u2019s Kappa is commonly used to ensure evaluation consistency [4-5]. 
To further reduce bias, we optimized the process as follows:\\n - **Encoding of Evaluation Dimensions**: Semi-structured interviews and ATLAS.ti analysis extracted scientifically grounded evaluation dimensions, which were integrated into GPT-4 prompts to minimize bias in automated evaluation.\\n - **Consistency Verification**: Fleiss's Kappa was employed to verify the consistency of evaluators' results across the same dimensions, ensuring the reliability of subjective evaluations.\\n - **Diversified Theoretical Framework**: Our framework spans teaching stages and question types to avoid stereotyping personality traits. For example, highly extraverted individuals showed varied language styles across closed- and open-ended questions, ensuring diverse personality representation. Reviewers yWBj and rtkz highlighted its scientific and practical value as a key contribution of this study.\\n\\n- **Addressing Cultural Bias**: The Big Five, a widely accepted cross-cultural personality model, was chosen to minimize cultural biases. While current experiments focus on a Chinese teaching environment, the SOE framework is highly generalizable to multicultural contexts, including Western language teaching. Future studies will involve evaluators from diverse cultural backgrounds to further validate its applicability.\\n\\n- **Transparent Evaluation Process**: During the subjective evaluation phase, we used the \\\"think-aloud protocol\\\" to document evaluators' decision-making processes, ensuring that evaluators minimized personal bias during analysis.\\n\\n**2. On the Issue of Evaluator Diversity**\\n\\n- **Priority on Expertise over Quantity**: Rather than increasing evaluator diversity, we prioritized their expertise in language teaching and student behavior modeling to ensure accurate evaluations and minimize errors from background differences. 
A rigorous screening questionnaire ensured evaluators met the strict criteria shown in Appendix D.3.\\n- **Evaluation Consistency Analysis**: Using Fleiss's Kappa, we verified the consistency of evaluators across the same evaluation dimensions, further ensuring the scientific rigor and reliability of the evaluation results.\\n\\n**3. Supplementary Objective Metric Evaluation Experiments**\\n\\nAs noted in common concern 2 of the Rebuttal Summary, we incorporated additional metrics based on reviewers' suggestions. These metrics provided a quantitative view, highlighting language differences across personality traits and the need to combine objective and subjective evaluations.\\n\\nIn summary, the multidimensional optimizations addressed the reviewer's concerns about evaluation bias and diversity. Supplementary objective experiments further validated the subjective evaluation results. \\n\\n[1] Gao, M., Hu, X., Ruan, J., Pu, X., & Wan, X. (2024). Llm-based nlg evaluation: Current status and challenges. arXiv preprint arXiv:2402.01383.\\n\\n[2] Liu, Y., Iter, D., Xu, Y., Wang, S., Xu, R., & Zhu, C. (2023). G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634.\\n\\n[3] Li, Y. (2023). A practical survey on zero-shot prompt design for in-context learning. arXiv preprint arXiv:2309.13205.\\n\\n[4] Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378.\\n\\n[5] Hassan, N. F. B., Puteh, S. B., & Muhamad Sanusi, A. B. (2019). Fleiss\\u2019s Kappa: Assessing the Concept of Technology Enabled Active Learning (TEAL). 
Journal of Technical Education and Training, 11(1).\"}", "{\"title\": \"Response to Reviewer yWBj's concern about simulating problem-solving and creative thinking in open-ended questions\", \"comment\": [\"***W2&Q1: Can the authors expand on how their system handles more complex cognitive behaviors beyond basic language learning?***\", \"**Response**: Thanks for the reviewer's attention to the complex cognitive behaviors, such as problem-solving and creative thinking, which are essential to assessing the realism of virtual students and their applicability across diverse educational contexts. As noted in common concern 3, we fully agree that these abilities are critical and welcome this opportunity to clarify and expand on how our study addresses these aspects.\", \"In this study, we designed the SOE pipeline with an emphasis on the importance of cognitive development in areas like problem-solving and creative thinking. Consequently, we included question types as a key evaluation dimension to reflect the virtual students' ability to operate at different cognitive levels (**see Sec 4.1 & Appendix B.2.2**):\", \"**Problem-Solving Ability**: This aspect primarily focuses on cognitive levels of application and analysis. We designed task-based questions closely related to the content of the lesson, requiring virtual students to engage in reasoning and analysis. For instance, when discussing reading material, virtual students with high extraversion were able to provide clear and logical answers based on the text.\", \"**Example Question**: \\u201cDuanmu Hongliang used many vivid words to express his feelings for his hometown. Can you find some examples and discuss them?\\u201d\", \"**Example Response**: \\u201cSure, teacher. 
For example, \\u201cMy hometown is in the northeast, with its dark, fertile soil, golden grains, red maple leaves, and white snow.\\u201d These descriptions make me feel his deep love for his hometown.\\u201d\", \"**Creative Thinking Ability**: This aspect evaluates students at the evaluation and creation levels of cognition. We designed open-ended questions to encourage students to express creative ideas, such as reinterpreting the theme of a passage or continuing a story. Highly extraverted virtual students demonstrated proactive responses.\", \"**Example Question**: \\u201cNow, let\\u2019s see whose imagination is the most creative. Based on the text, can you describe the image of the Yellow River in your own words?\\u201d\", \"**Example Response**: \\u201cSure, Teacher. In the poem, the Yellow River is depicted as a great mother who nourishes the entire Chinese land. Its surging waves symbolize the resilience and perseverance of the Chinese people, and its grandeur reflects their greatness and pride.\\u201d\", \"More detailed response examples at different learning stages are added in **Appendix D.2.2**. Through these designs, we have preliminarily validated the potential of LVSA to simulate problem-solving and creative thinking in complex cognitive behaviors.\", \"Our primary goal in this study is to propose a scientific and systematic SOE pipeline to address challenges in modeling and evaluating LVSA throughout the teaching process. 
Although comprehensive analysis of LVSA abilities is not the main focus at this stage, our experimental data suggests that LVSA already shows representative potential in complex cognitive behaviors, such as problem-solving and creative thinking.\", \"In future work, we plan to further expand this aspect through the following steps:\", \"**Refining the Cognitive Evaluation Framework**: Drawing on cognitive taxonomy theories (e.g., Bloom's taxonomy), we aim to design more fine-grained question types to comprehensively cover virtual students' performance in complex cognitive behaviors.\", \"**Analyzing the Relationship Between Personality and Cognitive Abilities**: Investigate how virtual students with different personality traits perform in problem-solving and creative tasks, to further improve the modeling accuracy of LVSA.\", \"**Expanding Multidisciplinary Scenarios**: Extend evaluation tasks to other disciplines (e.g., solving math problems, designing scientific experiments) to test LVSA's cognitive simulation capabilities across subject domains.\", \"We greatly appreciate the valuable suggestions regarding complex cognitive tasks. Through the evaluation dimension of question types, this study has preliminarily validated the potential of virtual students in problem-solving and creative thinking. In the future, we will conduct further in-depth analysis to continually enhance the simulation capabilities and educational value of LVSA.\"]}", "{\"title\": \"Response to Reviewer wxDr's concern about multimodal inputs\", \"comment\": [\"***W2&Q2: How might the model be modified to incorporate multimodal inputs?***\", \"**Response**: We greatly appreciate the reviewer's attention to multimodal input. The issue you raised is thoroughly addressed in our study, as we place a high priority on the compatibility of multimodal information (**see Sec 3.2 & Sec 6**). 
The SOE framework proposed in this study is a highly compatible and general framework, designed with the need for multimodal extension in mind. As a result, we selected four multimodal large models as the base models, laying the technical foundation for future research transitioning from unimodal to multimodal approaches.\", \"Below is our detailed response.\", \"**1. The SOE Framework's Compatibility with Multimodal Information**\", \"**Research Focus of the SOE Framework**:\", \"This study focuses on language expression, the most explicit form of learning behavior for virtual students, and proposes a comprehensive research pipeline that spans data construction, environment validation, role modeling, and role evaluation. It is worth noting that the SOE framework was designed with potential multimodal extensions in mind. While the current research primarily focuses on the language modality, it has already selected large models capable of processing multimodal data as the base models. This provides a solid technical foundation for future multimodal research. In summary, this study effectively addresses the challenges of modeling and evaluating virtual students, particularly for early adolescents, and lays the groundwork for future extensions into multimodal research to fully simulate real classroom dynamics.\", \"**Technical Preparation and Future Directions**:\", \"In future work, we plan to incorporate visual and auditory modalities, such as students\\u2019 facial expressions, gestures, and tone of voice. This will not only enrich the model's input but also enhance its effectiveness and accuracy in simulating real classroom interactions.\", \"**2. Reasons for Not Conducting Multimodal Research**\", \"**Challenges in Multimodal Data Collection and Annotation**\", \"**Complex Data Collection Requirements**: Multimodal data, such as video, audio, and their synchronization with text, requires precise temporal alignment and processing. 
This is not only technically challenging but also demands meticulous control and extensive experimental design considerations in practice.\", \"**High Standards for Data Annotation**: Annotating multimodal data requires a high level of expertise, particularly in understanding and labeling students' non-verbal behaviors (e.g., gestures, facial expressions, and tone changes). Moreover, maintaining consistency and accuracy across modalities poses a significant challenge, often necessitating substantial time and financial resources.\", \"**Technical and Resource Limitations**\", \"**Computational Resource Requirements**: Processing and integrating multimodal data (e.g., visual and auditory information) typically demand higher computational capacity. At the current stage of research, given resource availability and management complexity, we chose to focus on the language modality to ensure the feasibility and high-quality output of our study.\", \"**Complexity of Model Training**: Developing and training multimodal models involves not only more complex data processing workflows but also addressing stability and optimization challenges during the training process. These technical difficulties have constrained us from conducting extensive multimodal research in the initial phase of the project.\", \"**Complexity of the Evaluation System**: Developing evaluation metrics suitable for multimodal outputs requires comprehensive consideration of factors such as linguistic coherence, visual accuracy, and auditory expressiveness. Establishing such an integrated evaluation system demands extensive preliminary research and methodological innovation.\", \"In summary, the SOE framework proposed in this study has been designed with compatibility for multimodal information in mind and has selected base models capable of supporting future multimodal research. This lays a solid technical foundation for extending from unimodal to multimodal studies in the future. 
We fully recognize the importance of multimodal input for modeling real classroom dynamics. However, due to the aforementioned challenges, this goal could not be achieved at the current stage of research. Looking ahead, we hope that with advancements in technology and increased resources, we can gradually overcome these obstacles and achieve truly comprehensive and diverse multimodal educational research. We sincerely thank the reviewer for their understanding and support, and we look forward to making further breakthroughs in this field in the future.\"]}", "{\"title\": \"Response to Reviewer 3QkY's Concern About the Pipeline Novelty and Generalization (Part IV: The Role of Social Science Theories in the SOE Pipeline)\", \"comment\": [\"***Reviewer\\u2019s Concern That Theoretical Concepts Do Not Contribute to the Core Model***\", \"We understand the reviewer's concern regarding the application of theoretical foundations to the core contribution of this research.\", \"Below, we respond to your concerns by explaining the motivations, challenges, and core contributions of the theoretical research incorporated into this study:\", \"**Motivation for Introducing Theoretical Frameworks**\", \"As previously mentioned, the integration of interdisciplinary theories not only plays a crucial role in constructing the dataset but also enhances the model's interpretability, fairness, and generalization ability. The theoretical frameworks used in this study, including Bloom's Taxonomy of Cognitive Domains, the Big Five Personality Theory, and educational foundation theories, are all well-established, widely accepted, and scientifically validated. 
**With nearly 60 years of academic development, these theories are universally applicable and scientifically grounded, making them ideal for meeting the needs of the SOE framework across multidisciplinary, multicultural, and diverse educational contexts.** They are especially crucial for avoiding cultural bias and improving the framework\\u2019s generalization capacity.\", \"**Challenges in Integrating Theoretical Frameworks**\", \"**Balancing AI Paradigms and Social Science Theories**: Interdisciplinary research requires balancing the quantitative AI paradigms with the qualitative research methods of social science theories. **AI models require quantifiable and computable data, whereas social science theories necessitate more interpretive and analytical approaches.** In designing the SOE pipeline, we carefully balanced these two distinct paradigms, **ensuring the computational requirements of AI were met while respecting the interpretability of social science theories.**\", \"**Transforming Unquantifiable Educational Theories into Operational Frameworks**: Many educational theories, such as cognitive learning theories and personality theories, are abstract and difficult to quantify. Therefore, we needed to find appropriate frameworks and methods to operationalize these theories for application in virtual student modeling.\", \"**Selecting Theories for Broad Applicability**: The fields of education and psychology are rich with theories, each offering some degree of value. However, choosing the most universally applicable theories that align with the goals of the model is crucial. 
We selected those theories that could be widely applied across different disciplines and cultural backgrounds, ensuring the SOE framework\\u2019s broad applicability and generalization capacity.\", \"**Core Contributions of Theoretical Concepts in the SOE Pipeline**\", \"**Bloom's Taxonomy**: As one of the most classic and universally applicable cognitive theories, Bloom's Taxonomy provides a clear hierarchical framework for modeling student learning progress across various educational settings. **This theory enhances the interpretability and generalizability of virtual student cognition modeling.**\", \"**Big Five Personality Theory**: The Big Five personality theory, applied in virtual student modeling, helps avoid cultural bias and is widely applicable across diverse student backgrounds. This theory allows for a more realistic simulation of student personality traits, **making virtual student behavior more diverse and flexible, thus improving personality diversity and model interpretability.**\", \"**Educational Foundation Theories**: In terms of teaching theory, whether in the design of teaching stages or question types, we selected theories that are adaptable to multi-disciplinary and multicultural contexts, **ensuring the disciplinary suitability of the SOE pipeline.**\", \"The theoretical concepts introduced in this study, which have been rigorously tested and analyzed over nearly 60 years, have been scientifically validated for their universality and applicability. 
**By integrating interdisciplinary theories from education and psychology, while accommodating both the AI quantitative research paradigm and the qualitative research paradigm of social science theories, we have ensured that the pipeline is both operational and scientifically grounded.** This integration also enhances the model\u2019s interpretability, generalization, and cross-disciplinary adaptability.\", \"Therefore, **one of the core contributions of the SOE framework is that it provides scientific theoretical support for virtual student modeling through interdisciplinary theories.** These theories are critical in addressing the complexity of virtual student modeling and enhancing the model\u2019s depth and breadth of application. We firmly believe that this framework will provide an important theoretical foundation and methodological guidance for future research in educational AI and virtual student modeling.\"]}", "{\"title\": \"Overall\", \"comment\": \"Overall, I'd like to say thanks to the authors for the efforts to address the concerns and limitations. Three main concerns are still not solved yet:\n\n1. Framework novelty: there is no foundational novelty or breakthrough in LLMs' reasoning for student simulation. The framework looks more like a data processing pipeline that integrates several data structure modules. Although several new concepts are introduced, they do not contribute to the core model contribution.\n\n2. Lack of baselines: there is no baseline comparison to show the unique advantage of the proposed framework.\n\n3. Generalization: only one dataset from the authors is evaluated. No public datasets are used for evaluation.\n\nTherefore, I will maintain my score.\"}", "{\"comment\": \"Thanks for the detailed reply.\n\n1. If you compare the simulation performance pre and post fine-tuning, I assume that you should compare the same simulated students pre and post fine-tuning. 
Then you should use repeated-measures ANOVA, instead of a straightforward two-way ANOVA. \\n\\n2. A significant difference among different student types does not necessarily mean that the simulation is realistic. It can just prove that the different simulated students' types are significantly different. \\n\\n3. Non-significant p-value is not equal to similarity. When the p-value in an ANOVA test is larger than 0.05, it means that the evidence is not strong enough to reject the null hypothesis (which assumes there is no significant difference between the group means). However, this does not necessarily imply that the conditions are \\\"very similar\\\".\"}", "{\"summary\": \"The paper introduces a novel AI framework, the \\\"Scene-Object-Evaluation\\\" (SOE) pipeline, designed to model Large Language Model-based Virtual Student Agents (LVSA) that mimic the behaviors and personalities of early adolescent students in educational settings. Unlike previous work that uses large language models (LLMs) to simulate expert teaching roles, this study focuses on creating realistic, student-like agents that reflect common learning challenges, emotional reactions, and personality differences.\", \"the_soe_pipeline_operates_in_three_stages\": \"Scene, which defines the educational scenarios and content; Object, which constructs virtual student agents with distinct personality traits through LoRA fine-tuning; and Evaluation, which assesses these agents' performance using human and GPT-4 evaluations. The virtual students are evaluated based on traits derived from the Big Five Personality Traits framework to ensure a range of realistic, individualized behaviors. 
Experiments show that these fine-tuned virtual agents provide human-like and personalized responses, validating their potential to support pre-service teacher training by offering realistic classroom simulations.\\n\\nThis research contributes a new approach to AI4Education, focusing on developing student agents that improve teachers\\u2019 instructional skills through realistic interactions, and lays groundwork for future applications in teacher training and multi-agent simulations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper brings a high degree of originality by shifting the focus from expert or teacher-oriented AI simulations to student simulations, addressing a significant gap in AI-driven teacher training tools. The introduction of the Scene-Object-Evaluation (SOE) pipeline to model Large Language Model-based Virtual Student Agents (LVSA) represents a creative approach to personalized student simulation, providing an adaptable framework that mirrors the complexity of real students. Using the Big Five Personality Traits for developing distinct virtual student personalities adds further novelty, offering a method for producing varied and realistic student responses that go beyond standard, pre-scripted models.\\n\\nThe research methodology is robust, leveraging the SOE pipeline to guide each stage of student agent construction. The use of LoRA fine-tuning to adapt the LLMs to personality-based response styles is well-executed, supported by a well-defined dataset and clear criteria for generating realistic dialogues. The authors conduct subjective evaluations through human raters and GPT-4, lending credibility to the LVSA models\\u2019 authenticity in simulating student behaviors. This multi-dimensional evaluation demonstrates thoughtful experimentation, allowing for both quantitative and qualitative assessments of the LVSA\\u2019s effectiveness. 
The use of the Big Five traits adds psychological rigor, enhancing the methodological depth and grounding the work in established personality theory.\\n\\nThe paper is generally clear in its presentation, with each step of the SOE pipeline outlined in a logical and accessible manner. Figures, particularly the pipeline overview, provide helpful visual support, clarifying the process of constructing and evaluating virtual student agents. The experimental setup, while complex, is conveyed effectively, making it easier for readers to understand the key steps and outcomes. However, more detailed examples of LVSA responses across personality types could further improve the clarity and transparency of the results. Expanding on specific evaluation metrics and providing examples of dialogue interactions for different personality types would also enhance readability.\\n\\nThe significance of this work is substantial, as it addresses an important gap in pre-service teacher training by providing a tool for realistic, student-like simulations. In traditional teacher training, access to diverse student interactions is often limited, and this work offers a scalable solution to that problem. The LVSA model provides a unique contribution by capturing the variability of student responses and behaviors, allowing teachers to practice and adapt to different learning personalities and styles. By simulating more realistic classroom scenarios, the LVSA approach has the potential to improve teacher preparedness, adaptability, and effectiveness. 
These contributions align well with the ICLR community\\u2019s interest in novel applications of AI, particularly in education and social simulation, and could stimulate further research into AI-driven, personalized learning simulations.\", \"weaknesses\": \"Issue: The evaluation primarily relies on subjective assessments from human raters and comparisons with GPT-4, which introduces variability in interpretation and limits reproducibility.\", \"suggestion\": \"Discussing strategies for scaling or optimizing the model\\u2019s computational efficiency would improve the framework\\u2019s usability for larger educational deployments. Efficiency improvements could also make LVSA more accessible to institutions with limited computational resources.\", \"issue\": \"The SOE pipeline's adaptability to varying classroom contexts, cultural backgrounds, or different educational levels is not addressed, which may affect its broader applicability and restrict generalization across diverse educational settings.\", \"questions\": \"Question: Beyond human and GPT-4 evaluations, did you consider incorporating objective metrics such as linguistic coherence, response diversity, or sentiment analysis to evaluate the LVSA\\u2019s responses?\", \"question\": \"Given the computational demands of generating varied personality-based responses, do you have plans to optimize the SOE pipeline for efficiency, perhaps through model distillation or response sampling techniques?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer wxDr's concern about the generalizability of SOE pipeline and virtual student performance in a multidisciplinary context\", \"comment\": \"***W3&Q1: How virtual students could be validated in settings beyond junior high language tasks?***\\n\\n**Response**: We appreciate the reviewer's attention to the model's adaptability to multidisciplinary 
scenarios. It is important to clarify that the choice of language-related tasks in this study was made to provide the most suitable experimental environment (**see Sec 3**). This allowed us to validate whether the SOE pipeline consistently exhibits human-like and personalized characteristics throughout the teaching process, thereby demonstrating the framework\\u2019s effectiveness and generalizability (**as discussed in the common concern 1**). The SOE framework is inherently generalizable and modular, designed to flexibly adapt to the requirements of different disciplines, offering a clear and actionable research pathway for multidisciplinary applications. Based on the reviewer's suggestion, in future studies, we plan to explore further subject-specific modeling, cross-disciplinary validation, and practical applications in education to fully unlock the potential of virtual students. \\n\\nBelow is our detailed response.\\n\\n**1. Discipline Selection in the Current Study**\\n\\n- **Reasons for Choosing Language Studies**: \\nLanguage studies hold a central position in school education and are highly open-ended and expressive. This not only facilitates validating the adaptability and personalized behavior of virtual students in simulating human learning behaviors but also enables in-depth exploration of interaction dynamics in teaching. Moreover, compared to tasks in mathematics, science, or other STEM disciplines, the complexity and diversity of language studies provide rich contexts for evaluating the performance of virtual students, contributing to a comprehensive assessment of the model's potential educational applications.\\n- **Focus on a Single Discipline in Research Practice**:\\nIn the initial stages of virtual student research, selecting a representative discipline allows for a deeper analysis and optimization of the core mechanisms of the developed model. 
This approach helps ensure the quality and depth of the model while gradually expanding its application to other disciplines.\\n\\n**2. Challenges in Multidisciplinary Expansion** \\n\\n- **Challenges of Interdisciplinary Differences**: \\nDifferent disciplines have significant variations in knowledge structure, learning objectives, and teaching methods. For instance, mathematics and science focus more on logical reasoning and experimental validation, which require the model to possess processing and reasoning capabilities distinct from those needed in language studies.\\n- **Complexity of Evaluation and Validation**: \\nExpanding to a multidisciplinary environment necessitates the development of evaluation methods tailored to the characteristics of each discipline. This involves not only assessing the accuracy of domain-specific knowledge but also effectively simulating student behaviors and responses in diverse disciplinary learning contexts.\\n\\n**3. Future Research Directions** \\n\\n- **Development of Multidisciplinary Models**: \\nGiven the flexibility and scalability of the SOE framework, we plan to extend our research to include more disciplines such as mathematics and science. This will involve adapting the existing model and developing new modules to meet the specific needs of different disciplines.\\n- **Simulation of Cross-Disciplinary Learning Scenarios**: \\nWe also aim to explore the application of virtual students in cross-disciplinary learning environments, such as integrated teaching scenarios combining mathematical logic and scientific experiments. This will further enhance the practicality and educational value of the model.\\n\\nIn summary, while the current study primarily focuses on language disciplines, we have already anticipated the potential for the model to expand into multiple disciplines within the SOE framework. 
We look forward to addressing the technical and methodological challenges of cross-disciplinary applications in the future, aiming to achieve a comprehensive and diverse virtual student simulation system. We sincerely thank the reviewer for the valuable suggestion, which provides valuable guidance for the direction of our future work.\"}", "{\"title\": \"Thanks for the affirmation and encouragement of the Reviewer wxDr\", \"comment\": \"We are deeply grateful for your thoughtful and comprehensive review and for the time you invested in re-evaluating our work. **Your recognition of our efforts, evidenced by the increase in score from 3 to 5, greatly encourages our team.** This enhanced score and your positive feedback have significantly boosted our confidence and solidified our resolve to tackle the substantial challenges inherent in interdisciplinary research.\\n\\n**The complexities of modeling and evaluating virtual students, along with integrating diverse approaches and multiple theoretical frameworks, are indeed formidable.** However, your acknowledgment of the potential impact of our work not only validates our current efforts but also strengthens our commitment to advancing this research. Encouraged by your insights, we are dedicated to further enhancing the SOE pipeline in varied educational settings and are particularly excited about exploring the integration of multimodal information to create more dynamic and realistic educational environments. 
**We aim to continuously refine our approach based on your valuable feedback, ensuring our work remains at the forefront of educational technology and AI, and expands across different disciplines and cultures.**\\n\\nThank you once again for your constructive feedback and encouragement, which guide us toward further advancements in this crucial area of research.\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about model selection and evaluation consistency Metrics\", \"comment\": \"***Q6: Why do you use other LLMs to perform such simulations and use GPT 4 to serve as the evaluator? Why 0.6806 is acceptable and why 0.6806 is within the range of substantial agreement?***\\n\\n**Response**: Thanks for the reviewer's attention to the model selection and consistency metrics used in this study. We chose models other than GPT-4 for virtual student modeling to ensure the generalizability of the SOE pipeline, facilitate practical application, and minimize biases from circular validation. Additionally, 0.6806 is the result of statistical analysis using Fleiss\\u2019s Kappa, a commonly used statistical measure in the social sciences. Unlike conventional metrics such as accuracy in the AI field, Fleiss\\u2019s Kappa focuses on assessing agreement levels among raters, emphasizing a different analytical perspective. \\n\\nBelow, we will elaborate on the reasons for using different LLMs for LVSA modeling and GPT-4 as the evaluation tool, as well as explain the meaning and standards of the 0.6806 consistency score.\\n\\n**1. Reasons for Using Multiple Models for Virtual Student Modeling**\", \"we_selected_different_llms_for_lvsa_modeling_for_the_following_reasons\": \"- **Generalizability and Scalability**: The SOE pipeline is designed as a generalizable framework to explore how different models perform in virtual student modeling. 
Using multiple models allows us to comprehensively test the framework\\u2019s adaptability rather than focusing on the performance of a single model.\\n- **Practical Application Needs**: In real-world educational applications, systems often need to operate in resource-limited environments, such as remote areas or on low-capacity devices. Using models with relatively lower resource requirements ensures the wide applicability of virtual student systems.\\n- **Reducing Bias from Circular Validation**: Employing GPT-4 for both modeling and evaluation could introduce self-validation biases, compromising the independence and objectivity of experimental results. By using GPT-4 solely as an evaluation tool, we maintain the independence of the evaluation process.\\n\\n**2. Why GPT-4 Was Chosen as the Evaluation Tool**\\n\\nGPT-4 has shown remarkable performance in assessing generative tasks and is widely used in similar studies [8-9]. Using GPT-4 as the evaluation tool achieves the following objectives:\\n- **Efficiency**: GPT-4 can process large-scale data quickly and assess virtual student performance across multiple dimensions (e.g., language fluency, emotional expression, and logicality), significantly improving evaluation efficiency.\\n- **Quality Assurance**: In this study, we validated the consistency between GPT-4 and human evaluators, calculating Fleiss\\u2019s Kappa coefficient of 0.6806. Fleiss\\u2019s Kappa is a standard method for measuring inter-rater agreement and is particularly suitable for assessing consistency among multiple evaluators [10-11]. It provides a quantitative score ranging from 0 to 1, where 1 indicates perfect agreement and 0 indicates no agreement. According to Landis's standards, this score falls within the \\\"substantial agreement\\\" range [12]. 
This result confirms the strong correlation between GPT-4\\u2019s evaluations and those of human evaluators, demonstrating its effectiveness in replacing human evaluators for certain large-scale tasks.\\n\\n**3. Explanation of the 0.6806 Consistency Score**\\n\\nFleiss\\u2019s Kappa values range from 0 to 1, with higher values indicating greater agreement. According to Landis's (1977) classification [12]:\\n- 0.40\\u20130.60: Moderate agreement\\n- 0.60\\u20130.80: Substantial agreement\\n- 0.80\\u20131.00: Almost perfect agreement\\n\\nThe score of 0.6806 in our study, based on this standard, indicates \\\"substantial agreement.\\\" This demonstrates that GPT-4\\u2019s evaluation results align closely with human evaluations, supporting its validity and reliability as an evaluation tool.\\n\\nThrough this explanation, we hope to clarify concerns about model selection and consistency metrics, while emphasizing the rationality and scientific rigor of our research methods. Thanks again for the valuable comment.\\n\\n[8]Sottana, A., Liang, B., Zou, K., & Yuan, Z. (2023). Evaluation metrics in the era of GPT-4: reliably evaluating large language models on sequence to sequence tasks. arXiv preprint arXiv:2310.13800.\\n\\n[9]Zhang, S., Dong, L., Li, X., Zhang, S., Sun, X., Wang, S., ... & Wang, G. (2023). Instruction tuning for large language models: A survey. arXiv preprint arXiv:2308.10792.\\n\\n[10] Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5), 378.\\n\\n[11] Hassan, N. F. B., Puteh, S. B., & Muhamad Sanusi, & A. B. (2019). Fleiss\\u2019s Kappa: Assessing the Concept of Technology Enabled Active Learning (TEAL). Journal of Technical Education and Training, 11(1).\\n\\n[12] Landis, J. R. (1977). The Measurement of Observer Agreement for Categorical Data. 
Biometrics.\"}", "{\"title\": \"Response to Reviewer 3QkY's Concern About the Pipeline Novelty and Generalization (Part V: Generalization of the SOE pipeline regarding dataset similarity)\", \"comment\": \"**2. Generalization of the SOE Framework**\n\nWe fully understand the reviewer's concern regarding the lack of validation using multiple datasets. While multi-dataset validation is indeed a key step in testing the generalization and robustness of AI models, the SOE pipeline proposed in this study is not a traditional AI algorithm model. Rather, **it is an operational framework designed for applications in educational internships and social simulation experiments**. In our study, **generalization focuses more on the adaptability and flexibility of the framework, particularly in supporting teaching internships across different educational stages and providing personalized, human-like feedback for various question types.** These capabilities have been validated through our experiments, and the generalization aspect was addressed in response to common concern 1.\n\nThe reviewer\u2019s suggestion to validate the framework using multiple datasets is more aligned with viewing the SOE pipeline as an AI algorithm model, which diverges from the objectives of our research. **Our framework is not aimed at optimizing a single algorithm but rather serves as a comprehensive research tool intended to provide solutions across interdisciplinary and educational contexts**. While multi-dataset validation is crucial for AI algorithm models, it is not the core of our work. Nonetheless, we are happy to engage further with the reviewer\u2019s suggestions and address each point in detail:\n\n***Reviewer\u2019s Concern on Similarity of the Dataset to Existing Educational Datasets***\n\nThe reviewer pointed out that our dataset \\\"is just similar to most education datasets. Maybe the language is different in Chinese. 
However, there are also many Chinese dataset available.\\\"\\n\\n- We acknowledge the existence of various educational datasets, such as the SQuAD reading comprehension dataset created by Stanford [5], the NCES dataset by the U.S. National Center for Education Statistics [6], and the Chinese evaluation dataset C-eval [7]. However, **these datasets typically evaluate model competency in subject-specific contexts or predict students' academic performance. They often focus on fine-tuning large educational models resembling experts, which does not align with our study's goals.** The dataset we have built is not only for educational context evaluation but also supports the personalized modeling of virtual students, incorporating cognitive levels and personality traits. \\n\\n- Additionally, while there are many Chinese dialogue datasets [8-10], they primarily consist of conversations from films or social media, involving adults rather than adolescents, and thus do not match our educational context. **The dataset in our study, built on a scientific theoretical framework, addresses the detailed needs of personalized modeling and ensures that teacher-student dialogues avoid cultural biases while covering all stages of teaching.** This design better supports comprehensive teaching internships for pre-service teachers.\\n\\n[5] Rajpurkar, P., Jia, R., & Liang, P. (2018). Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822.\\n\\n[6] NCES. https://nces.ed.gov/\\n\\n[7] Huang, Y., Bai, Y., Zhu, Z., Zhang, J., Zhang, J., Su, T., ... & He, J. (2024). C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. Advances in Neural Information Processing Systems, 36.\\n\\n[8] Chinese-chatbot-corpus. https://github.com/codemayq/chinese-chatbot-corpus\\n\\n[9] Wang, Y., Ke, P., Zheng, Y., Huang, K., Jiang, Y., Zhu, X., & Huang, M. (2020). A large-scale chinese short-text conversation dataset. 
In Natural Language Processing and Chinese Computing: 9th CCF International Conference, NLPCC 2020, Zhengzhou, China, October 14\\u201318, 2020, Proceedings, Part I 9 (pp. 91-103). Springer International Publishing.\\n\\n[10] Zhang, Z., Li, J., Zhu, P., Zhao, H., & Liu, G. (2018). Modeling multi-turn conversation with deep utterance aggregation. arXiv preprint arXiv:1806.09102.\"}", "{\"comment\": \"Thanks for the reply. It does not make sense to me. If there is no closely related baseline, then you should at least use a relatively related baseline. Otherwise, it can't demonstrate the superiority of the proposed framework.\"}", "{\"title\": \"Response to Reviewer 3QkY's Concern About ANOVA Analysis\", \"comment\": \"We sincerely thank the reviewer for the detailed feedback on our experimental design and statistical analysis methods. We understand the concerns raised and would like to further clarify our experimental approach and analysis methods:\\n\\n**1. Use of Repeated Measures ANOVA vs. Two-Way ANOVA**\\n\\nThe reviewer suggested using a repeated measures ANOVA instead of a two-factor ANOVA. We appreciate this suggestion but would like to clarify that, due to the simulation of five different virtual student personalities in our experiment, repeated measures ANOVA is not appropriate. The significant differences between each virtual student type result in strong independence between the data, and **our primary interest lies in examining the interaction effect between virtual student types and the pre- and post-fine-tuning conditions.** Therefore, we opted for a two-factor ANOVA, which better captures the differences in performance across the different student types before and after fine-tuning. **The results confirmed significant differences pre- and post-fine-tuning, with varying trends in response to fine-tuning across the different virtual student types.**\\n\\n**2. 
Relationship Between Significant Differences in Student Types and Simulation Authenticity**\\n\\nWe fully agree with the reviewer's point that \\u201csignificant differences between virtual student types do not imply authenticity of the simulation.\\u201d However, this experimental conclusion does not aim to prove the authenticity of virtual students. It seems there may have been some confusion regarding the purposes of our two-way ANOVA and one-way ANOVA experiments. We would like to clarify:\\n- The goal of the **two-way ANOVA** experiment was to demonstrate **the interaction effect of the five virtual student types before and after fine-tuning**, i.e., to show that fine-tuning is effective for virtual students and that different virtual students respond differently to fine-tuning.\\n- The goal of the **one-way ANOVA** experiment was to show that **there is no significant difference between the responses of virtual students and real students**, i.e., the probability of virtual student responses being identified as real students is not significantly different from that of real student responses being identified as real students.\\n\\nThus, the significant differences between virtual student types reported in the two-factor ANOVA experiment are unrelated to our hypothesis regarding the \\\"authenticity\\\" of virtual students. **This experiment specifically addresses the significant differences in virtual student performance before and after fine-tuning, not the authenticity of virtual students.**\\n\\n**3. Interpretation of Non-Significant p-values and Similarity**\\n\\nThe reviewer pointed out that \\\"a non-significant p-value does not equate to similarity,\\\" and we fully agree with this statement. The statement that \\\"when the p-value is greater than 0.05, we cannot reject the null hypothesis\\\" is accurate. However, this conclusion should be further analyzed in the context of the Turing test experimental data. 
**By comparing the probabilities of the virtual student group and the real student group being assessed as real students, we concluded that there is no significant difference between virtual and real students across 10 evaluators.** This means that evaluators were unable to distinguish between virtual student responses and real student responses, thus indicating that the virtual student responses were sufficiently similar to real student responses. This supports our hypothesis regarding the authenticity of virtual students in the Turing test.\\n\\nWe appreciate the reviewer's careful review of our work, and we hope that our responses above have clarified the concerns raised.\"}", "{\"title\": \"Common Concern 2\\uff1aSupplementary Experiments on Objective Evaluation Metrics\", \"comment\": \"We sincerely thank the reviewers for their concerns regarding the evaluation methods in this study. We fully agree that incorporating objective metrics can enhance the scientific rigor of the evaluation process. Accordingly, we have conducted additional experiments, and the results align with the conclusions drawn from subjective evaluations (**see Sec 5.5 & Appendix D.6**). These findings validate the personalization and effectiveness of LVSA modeling while also revealing the limitations of objective metrics, further supporting the core motivation of this study: addressing the challenge of \\\"difficult evaluation of virtual students.\\\"\", \"below_is_our_detailed_response\": [\"**1. Selection of Objective Evaluation Metrics**\", \"We reviewed commonly used objective evaluation metrics in the field of text generation. Due to the open-ended and diverse nature of student language expression in educational scenarios, traditional reference-based evaluation metrics (e.g., BLEU, ROUGE) are not suitable. 
Therefore, we selected reference-free objective metrics that better meet the needs of educational scenarios, including:\", \"**Text Token**: Measures the length of expressions across students with different personalities.\", \"**Perplexity**: Evaluates the fluency of language generation and model adaptability.\", \"**Type-Token Ratio (TTR)**: Reflects the diversity of language expression.\", \"**Sentiment Analysis**: Indicates the emotional tendencies in language expression.\"], \"the_last_three_objective_evaluation_metrics_correspond_to_the_aspects_you_mentioned\": \"**linguistic coherence, response diversity, and sentiment analysis**.\\n\\n**2. Objective Evaluation Results and Limitations**\\n\\nObservations from objective metrics yielded results consistent with subjective evaluations, demonstrating that the virtual students effectively simulated student behaviors associated with different personality traits. Using min-max normalization to visualize the experimental results, we found that the HE (High Extraversion) personality performed prominently in clarity and positive sentiment, reflecting its outgoing nature and fluency in expression. The HA (High Agreeableness) personality exhibited high lexical richness and positive sentiment, indicative of rich expression and strong cooperativeness. In contrast, the LO (Low Openness) and LC (Low Conscientiousness) personalities scored lower in clarity and lexical richness, consistent with their concise, conservative, or casual language styles. 
(Detailed data analysis results can be found in **Sec 5.5 & Appendix D.6**).\\n\\nHowever, we also identified limitations in objective evaluation, which further underscore the importance of subjective evaluations in capturing the complexity of student language and emotional expression.\\n- **Bias in TTR**: TTR is highly dependent on text length, which may lead to overestimation of shorter responses (e.g., \\\"It's from the Song Dynasty.\\\" has a lexical richness of 1).\\n- **Specificity of Perplexity**: Perplexity can increase due to the presence of classical Chinese or technical terms in the text, obscuring the actual fluency of the student's expression (e.g., \\\"It's a narrative text.\\\" has a perplexity of 119.8).\\n- **Directional Bias in Sentiment Analysis**: Sentiment analysis can be influenced by keywords from the teaching content, leading to deviations from the true emotional state (e.g., \\\"The theme of this poem is filial piety and loyalty. It praises Mulan's devotion and longing for a peaceful life.\\\" is merely a description of textbook content but was misclassified as positive sentiment).\\n\\n**3. Importance of Subjective Evaluation**\\n\\nBy combining subjective evaluation with GPT-4 automated evaluation, we have achieved a good balance between scientific rigor and efficiency. Subjective evaluation captures the emotional and cognitive complexity in students' language, while GPT-4 automated evaluation provides greater consistency and rapid verification capabilities. This dual approach makes the evaluation both comprehensive and efficient, offering a highly practical solution for future virtual student modeling.\\n\\nWe sincerely thank the reviewers' valuable suggestions regarding objective evaluations. 
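As a concrete illustration of two of the measures discussed above, here is a minimal pure-Python sketch of the type-token ratio and the min-max normalization used for visualization. The tokenized sentences and scores below are invented placeholders, not the study's actual data or pipeline; the short example also exhibits the TTR length bias noted in the limitations.

```python
def type_token_ratio(tokens):
    """TTR = distinct tokens / total tokens; biased upward for short texts."""
    return len(set(tokens)) / len(tokens)

def min_max_normalize(values):
    """Scale a list of metric scores into [0, 1] for cross-metric comparison."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# A four-token response gets TTR 1.0 -- the short-text overestimation bias.
short = ["it's", "from", "the", "song-dynasty"]
print(type_token_ratio(short))             # 1.0

# A longer response with repeated tokens scores lower.
longer = ["the", "river", "is", "wide", "and", "the", "river", "is", "long"]
print(round(type_token_ratio(longer), 2))  # 0.67

# Min-max normalization of three hypothetical metric scores.
print(min_max_normalize([2.0, 3.0, 5.0]))
```

This kind of normalization only makes scores comparable within one metric across students; it does not remove the per-metric biases (length dependence, domain vocabulary) that the subjective evaluation is meant to compensate for.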
These recommendations not only provided important references for the evaluation system in this study but also broadened our perspective on virtual student modeling and evaluation.\"}", "{\"summary\": \"++ I have increased the scores after the clarifications from the Authors.\\n\\nThis paper introduces a novel approach to using Large Language Models (LLMs) for simulating virtual students, instead of the more common focus on AI-driven teacher models. The authors propose a framework called SOE (Scene-Object-Evaluation) for generating LLM-based Virtual Student Agents (LVSA). This framework is applied within educational contexts, particularly for teacher training. The key contributions include:\\n\\n1.Theoretical Framework: A comprehensive model for constructing LVSA based on personalized teacher-student interaction data.\\n2.Human and GPT-4 Evaluations: The integration of human subjective evaluation with GPT-4's capabilities to assess the authenticity of virtual student agents.\\n3.Experimental Validation: Extensive experiments confirm the feasibility of generating human-like, personalized virtual student agents for improving teacher training.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Originality: The paper presents a novel shift from teacher-centric AI systems to virtual student simulations, which can greatly impact teacher training and educational AI systems. By focusing on virtual students, it provides a fresh and creative problem formulation, opening new avenues in AI4Education research.\", \"quality\": \"The paper demonstrates a well-structured methodology, from developing a dataset of teacher-student interactions to fine-tuning LLMs and validating their outputs with human and GPT-4 evaluations. 
The theoretical and operational frameworks are well thought out, ensuring a rigorous scientific contribution.\", \"weaknesses\": \"Limited Real-world Data Application: While the framework is well-developed, the fine-tuning process relies on datasets sourced from controlled environments, which may not fully capture the complexity of real-world classroom dynamics. Future work should consider using more diverse and naturalistic data for better generalization.\", \"evaluation_focus\": \"The evaluation primarily centers on language comprehension and emotional simulation. However, cognitive challenges like problem-solving or more complex reasoning skills, which are critical to student behaviors, are underexplored. This may limit the scope of the virtual students' realism in broader educational contexts.\", \"questions\": \"Handling Cognitive Diversity: Can the authors expand on how their system handles more complex cognitive behaviors beyond basic language learning? For example, how well can LVSA simulate problem-solving tasks or deal with creative thinking in open-ended questions?\", \"mitigating_llm_hallucinations\": \"What specific strategies can be employed to minimize hallucinations in virtual student responses? Would incorporating additional real-world classroom data or using more advanced fine-tuning methods help in this regard?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper tries to simulate student behaviors (as opposed to teachers' behavior) using LLMs, and to do so it proposes the SOE (Scene-Object-Evaluation) framework. 
They evaluate these virtual students using multi-dimensional experiments.\", \"the_paper_has_three_contributions\": \"1) A theoretical framework for LVSA\\n2) The integration of human subjective evaluation metrics into GPT-4\\n3) An empirical demonstration that virtual student agents can closely emulate real student behaviors\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Applying virtual students for teacher training can be potentially impactful.\", \"The paper is easy to follow.\"], \"weaknesses\": \"- Reliance on subjective metrics and GPT-4 assessments could introduce biases. Personality-based biases (relying on the Big Five traits may lead to stereotypical behaviors) and cultural biases (focusing on Chinese students could limit generalizability). Consider using a more diverse range of evaluators. \\n\\n- Limited realism. Not supporting multimodal inputs limits its usefulness. Incorporating visual and audio elements, at a minimum, could better reflect real-world classroom dynamics, since real classroom settings often rely on visual and auditory cues.\\n\\n- Needs more varied context. The model should have the capability to simulate more subjects like math and science. This would allow for a more relevant model, and underscores the need for a model capable of advanced reasoning and interpretation.\", \"questions\": \"Some important questions: How could virtual students be validated in settings beyond junior high language tasks? How might the model be modified to incorporate multimodal inputs? What steps are being taken to address biases that might occur in the evaluations? Did the authors explore additional evaluation metrics? 
Could the authors elaborate on the ethical implications of simulating student behaviors (data privacy could be a concern)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about statistical analysis for human evaluation metrics\", \"comment\": \"***Q11: For section 5.2 human Turing test, simply showing the average results of all human evaluators is not a standard nor convincing approach. A formal statistical analysis such as ANOVA is needed.***\\n\\n**Response**: Thanks for suggesting the inclusion of formal statistical analysis, such as ANOVA, to enhance the evaluation in Sec 5.2 of our study. Considering the significant differences in subjective evaluation data from 10 evaluators pre and post fine-tuning, and the absence of noticeable differences with the real student control group, our original analysis focused on large-scale evaluations using GPT-4. \\n\\nBased on the reviewer's valuable feedback, **we have added the ANOVA analysis in Sec 5.2 and Appendix D.4.2, including two-way ANOVA for five types of student personalities pre and post fine-tuning, as well as one-way ANOVA comparing fine-tuned LVSA (HE, HN, HA, LO, LC) and real student.** These analyses demonstrate that fine-tuned LVSA significantly improved in human-likeness and showed no significant differences compared to real students. \\n\\nBelow are the details.\\n\\n**1. Two-Way ANOVA for Five Personality Types pre and post Fine-Tuning**\\n- **Experimental Design:**\\nWe conducted a fully randomized two-way ANOVA, with the probability of being identified as a real student (Probability) as the dependent variable. The analysis covered five types of student personalities\\u2014high neuroticism (HN), high agreeableness (HA), high extraversion (HE), low conscientiousness (LC), and low openness (LO)\\u2014before (Pre) and after (Post) fine-tuning. 
The goal was to detect the main effects of student types, fine-tuning status, and their interactions.\\n- **Experiment Results:**\\n - **Main Effect of Student Types**: Significant (p < 0.001), indicating that the five personality types differ significantly in their likelihood of being identified as real students.\\n - **Main Effect of Fine-Tuning**: Highly significant (p < 0.001), showing that fine-tuning greatly improves the human-likeness of virtual students.\\n - **Interaction Effect**: Significant (p < 0.001), demonstrating that different student types respond differently to fine-tuning.\\n\\nWe also visualized the results with an interaction effect plot as shown in Fig. A22. The graph shows that post-fine-tuning, the differences between personality types narrowed, and the probability of being identified as real students approached 1.0 for all types. This indicates that fine-tuning significantly enhanced the model's generative capabilities. The varying slopes of the lines illustrate that different types of students responded differently to fine-tuning.\\n\\n**2. One-Way ANOVA Comparing Fine-Tuned Virtual Students and Real Students**\\n- **Experimental Design:**\\nTo further validate the human-likeness of fine-tuned virtual students, we conducted a one-way ANOVA comparing the performances of five types of fine-tuned LVSA (HN, HA, HE, LC, LO) and real student (RS), using the probability of being identified as real student as the dependent variable.\\n- **Experiment Results:**\\nThe results show no significant differences (p > 0.05) between fine-tuned virtual students and real students, indicating that the human-likeness of virtual students closely matches that of real students. Further analysis revealed minimal differences in mean scores between groups, with high consistency in data distributions.\\n\\nThese additional statistical analyses strengthen the scientific basis of our findings, addressing the reviewer' s concerns and enhancing the rigor of our study. 
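To make the group-comparison step concrete, here is a minimal pure-Python sketch of the one-way ANOVA F statistic (between-group vs. within-group variance) of the kind used to compare "identified as real" probabilities across groups. The data below are invented toy values, not the study's results, and the p-value step (looking up F under the F distribution with k-1 and n-k degrees of freedom) is omitted.

```python
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over lists of observations."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: each group mean vs. the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: each observation vs. its own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)   # mean square between, df = k - 1
    ms_within = ss_within / (n_total - k)  # mean square within, df = n - k
    return ms_between / ms_within

# Hypothetical per-group "judged real" scores for three groups.
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(f_stat)  # 3.0 for this toy data
```

A small F (relative to the critical value for the given degrees of freedom) is what yields the non-significant p > 0.05 interpretation; a two-way design with an interaction term additionally partitions variance by a second factor (e.g., pre/post fine-tuning).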
Thanks again for the valuable feedback, which has significantly improved the quality of our research.\"}", "{\"title\": \"Response to Reviewer rtkz's concern about adding analysis of the generated LVSA Responses\", \"comment\": \"***W3&Q3: Could you provide specific examples of how LVSA responses differ across personality types, particularly with the Big Five traits?***\\n\\n**Response**: We sincerely thank the reviewer for the kind suggestion to include specific examples of LVSA responses with different personality traits. We fully agree that showcasing behavioral manifestations through concrete examples is important. Based on the reviewer\\u2019s suggestion for a more fine-grained and in-depth analysis of the generated responses, we have further supplemented these analyses in **Sec 4.3 & Appendix D.2.2**. \\n\\nBelow is our detailed response to this issue.\\n\\n**1. Content Analysis Method**: Taking the Internvl model as an example, we conducted an in-depth analysis of the performance of different virtual students across various teaching stages.\\n\\n- **Pre-lesson Introduction Phase**: Questions in this phase are simpler, focusing on setting up the learning context. HN students exhibit nervousness and repetition in response to closed questions. 
For instance, when asked \\u201cDo you know who wrote Hymn to the Yellow River?\\u201d, they repeated phrases like \\\"Uh, it\\u2019s\\u2026 it\\u2019s Guang Weiran.\\u201d HA students respond positively with emotional language, such as, \\\"I think the Yellow River is magnificent, like a giant dragon winding through mountains.\\u201d HE students show strong emotional engagement in responses to open questions, such as, \\\"The Yellow River is the symbol of the Chinese people\\u2019s resilience and strength.\\u201d In contrast, LC students display loose and illogical language, e.g., \\\"Uh, she seems, seems like Lu Xun was her teacher, maybe.\\u201d LO students give short, evasive answers, e.g., \\\"Uh\\u2026\\\".\\n- **New Lesson Instruction**: With increasing question complexity, students need to analyze content from the text. HN students struggle with details and give fragmented answers, e.g., \\\"Uh, Lu Xun, uh, when he laughed, uh, his eyes were smiling.\\u201d HA students provide logical and in-depth answers, such as, \\\"I think it describes the powerful flow of the Yellow River, as strong as its surging waves.\\u201d HE students give both structured and expressive responses, e.g., \\\"These descriptions made me feel the deep love he has for his hometown.\\u201d By comparison, LC students' answers are scattered and lack clear opinions, while LO students give brief and monotonous responses, e.g., \\\"The water of the Yellow River is muddy, with lots of twists and turns.\\u201d\\n- **Knowledge Consolidation Phase**: This phase requires students to recall and summarize key points. 
HN students display nervousness, with filler words like, \\\"Uh, there\\u2019s black soil, uh, there\\u2019s sorghum\\u2026\\u201d HA students summarize clearly, e.g., \\\"The Yellow River symbolizes the greatness and resilience of the Chinese people, inspiring awe.\\u201d HE students demonstrate strong organizational ability, e.g., \\\"This sentence shows the volunteers' perseverance and determination to overcome challenges.\\u201d LC students exhibit repetition and a lack of logic, while LO students demonstrate low confidence and poor knowledge retention, e.g., \\\"Uh, maybe it's because he really loves his hometown?\\u201d\\n- **Class Exercise Phase**: Questions in this phase emphasize the application of knowledge. HN students over-focus on accuracy in closed questions and struggle with coherence in open questions. HA and HE students provide accurate, diverse answers with emotional and personalized insights, while LC students give vague responses, and LO students continue their pattern of brief and incomplete answers.\\n- **Lesson Summary Phase**: Differences in students' summarization abilities become most apparent. HN students give repetitive and vague summaries, e.g., \\\"Uh, we should, uh, love our hometown and land, uh, and protect it.\\u201d HA students offer rich, emotionally resonant summaries, e.g., \\\"Through this lesson, I've gained a deeper understanding of the resilience and warmth of the Chinese people.\\u201d HE students provide comprehensive and personalized conclusions, e.g., \\\"Duanmu Hongliang\\u2019s oath reflects his deep love for his hometown and belief in liberation\\u2014it reminds us that our hometown is always our harbor.\\u201dLC students give overly general summaries, e.g., \\\"We learned Hymn to the Yellow River,\\u201d while LO students avoid engagement, e.g., \\\"Uh, the Yellow River is big and important.\\u201d\\n\\n**2. 
Supplementary Objective Evaluation Metrics**: As noted in our response to Q1, we incorporated additional metrics based on reviewers' suggestions, including text token, perplexity, TTR, and sentiment analysis. These metrics offered a quantitative view, highlighting differences in language expression across personality traits.\\n\\nFinally, we sincerely thank the valuable suggestion, which motivated us to conduct in-depth analyses and comprehensively demonstrate the modeling effectiveness and behavioral differences of virtual students with diverse personality traits.\"}", "{\"title\": \"Response to Reviewer rtkz's concern about model optimization and computational complexity\", \"comment\": \"***Q7: Do you have plans to optimize the SOE pipeline for efficiency, perhaps through model distillation or response sampling techniques?***\\n\\n**Response**: We sincerely thank the kind suggestion to add a discussion on the scalability and efficiency optimization of the SOE pipeline. This issue is of great importance for the practical application of LVSA in large-scale educational deployments. In the current work, we have already implemented measures to reduce computational overhead. Based on the reviewer's feedback, we have supplemented the discussion section with future strategies, including model distillation and response sampling optimization, to further enhance computational efficiency. \\n\\nBelow is our detailed response.\\n\\n**1. Optimization Measures in the Current Study** \\n- **Utilizing the Swift Framework for Efficient Fine-Tuning**: The Swift framework optimizes fine-tuning efficiency through modular design, enabling rapid adaptation to different tasks while reducing memory usage and computational costs.\\n- **Introducing LoRA (Low-Rank Adaptation)**: LoRA is a lightweight, parameter-efficient fine-tuning method that adjusts only a subset of the model parameters. 
This approach significantly reduces training and inference resource requirements while maintaining the ability to simulate virtual student behaviors effectively.\\n\\nThese measures have effectively reduced computational resource consumption in this study\\u2019s experiments, providing a solid foundation for future efficiency optimizations.\\n\\n**2. Future Directions for Efficiency Optimization**\\n- **Lightweight Model Design**:\\n - **Task-Specific Re-Finetuning**: Perform further specialized optimization on existing large models to focus on specific educational tasks (e.g., classroom questioning, knowledge consolidation) or particular student behavioral traits. This will enhance the model's ability to handle these tasks, enabling it to perform efficiently with a smaller parameter setup.\\n - **Using Teacher Model Distillation**: Leverage teacher model distillation techniques to transfer knowledge from medium-scale large models to more lightweight models. This approach retains the large model\\u2019s core capabilities in personalized responses and contextual understanding while significantly reducing hardware resource requirements. It enables the development of a lightweight agent model, effectively lowering deployment costs.\\n- **Response Sampling Optimization**: To address computational demands or cost optimization issues in generating personalized responses, we plan to explore response sampling techniques to optimize the generation process. 
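As a rough sketch of what such response sampling could look like, the snippet below combines top-k truncation with temperature scaling over a toy logit vector. The logits, k, and temperature values are invented for illustration; this is not the authors' implementation.

```python
import math
import random

def sample_topk(logits, k=2, temperature=1.0, rng=None):
    """Keep the k highest logits, apply temperature, sample from the softmax."""
    rng = rng or random.Random(0)
    # Indices of the k largest logits (smaller k => cheaper, more deterministic).
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Temperature < 1 sharpens the distribution; > 1 flattens it.
    scaled = [logits[i] / temperature for i in top]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    # Inverse-CDF sampling over the truncated distribution.
    r, acc = rng.random(), 0.0
    for idx, w in zip(top, weights):
        acc += w / total
        if r <= acc:
            return idx
    return top[-1]

# With k=1 this degenerates to greedy decoding (always the argmax token).
print(sample_topk([0.1, 2.0, 0.5], k=1))
```

In this spirit, a narrow k and low temperature would suit closed-ended tasks, while a wider k and higher temperature would preserve diversity for open-ended tasks.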
By adjusting sampling strategies during the generation process, such as temperature control and top-k sampling, we can effectively reduce unnecessary computation and improve response speed.\\n - For **closed-ended tasks**, reduce the sampling range to enhance generation efficiency.\\n - For **open-ended tasks**, increase the sampling range to ensure content diversity and complexity.\\n\\nIn Sec 6, we have elaborated on the theoretical foundations and implementation plans for the above optimization strategies, ensuring that readers can understand how these techniques can be practically applied in educational technology. We also explored the potential impact of these strategies in real teaching scenarios.\\n\\nBy implementing these strategies, we aim not only to improve the computational efficiency of the model but also to enhance its usability and applicability in global educational environments. Once again, we sincerely thank the valuable suggestions, which will help us better optimize and expand our educational model in future research.\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about clarification of evaluation metrics\", \"comment\": \"***Q7: What does the percent % mean? Does it refer to the accuracy or any metrics, to measure what?***\\n\\n**Response**: Thanks for the reviewer's attention to the evaluation metrics used in this study. We recognize the importance of clearly conveying the meaning of these metrics for an accurate understanding of our research contributions. Below, we clarify the evaluation metrics and have supplemented the revised version of Section 5 with descriptions related to the calculation of these metrics.\\n\\nThe percentage metrics cited in Section 5.4 specifically represent the proportion of virtual student responses identified as real student responses by evaluators. The detailed explanations for each context are as follows:\\n\\n**1. 
Percentage in Human Evaluation** (Line 416 from the original manuscript , \\u201c100%\\u201d):\\n\\nThis percentage refers to the proportion of all virtual student-generated responses that were completely mistaken by human evaluators as real student responses. A score of 100% indicates that all simulated responses were successfully perceived as authentic by evaluators.\\n\\n**2. Percentage Across Different Learning Stages** (Line 463 from the original manuscript ):\\n\\nThis percentage reflects the proportion of virtual student responses judged to be real student responses during specific teaching phases (e.g., pre-lesson introduction, new knowledge instruction, consolidation, summary).\\n\\n**3. Percentage Across Different Question Types** (Line 482 from the original manuscript ):\\n\\nThis percentage shows the frequency with which virtual student responses to open-ended and closed-ended questions were identified as real student responses.\\n\\nThese evaluation metrics are used to measure the effectiveness of virtual students in simulating real student behaviors and their adaptability across different teaching scenarios and question types. In the revised version, we have provided more explicit descriptions of these metrics and their calculation methods to ensure accuracy and transparency (See Sec 5).\\n\\nThanks for the valuable comments, which have helped us refine the clarity of our evaluation metrics.\"}", "{\"title\": \"Response to Reviewer 3QkY's Concern About the Pipeline Novelty and Generalization (Part III: The Role of Data in the SOE Pipeline)\", \"comment\": \"***Reviewer's Concern on the Framework Appears as a Data Processing Pipeline***\\n\\nWe understand the reviewer's concern about the SOE pipeline being perceived as a \\\"data processing pipeline\\\" and would like to clarify the core contribution of this framework. 
While we acknowledge the critical role of data processing within the SOE framework, **it is essential to highlight that data-driven approaches have become foundational to AI development, particularly in the AI for Science domain**, where data design, processing, and validation are integral to scientific applications [3]. **As articulated by Li Feifei's team at Stanford University in their 2022 Nature Machine Intelligence article, AI is shifting from a model-driven to a data-driven approach, marked by the introduction of ImageNet. The creation of suitable datasets and data processing workflows has become one of the greatest challenges in AI development and evaluation [4]**. Hence, the data-driven research paradigm is not only applicable to AI algorithm development but also forms the core logic for AI applications in scientific fields.\\n\\nIn this context, the inclusion of detailed data processing steps in the design of the SOE pipeline reflects the scientific rigor that underpins the pipeline. It is understandable that the reviewer sees our work as a \\\"data processing pipeline.\\\" However, **we emphasize that the core contribution of the SOE framework lies not only in the dataset itself but in the introduction of a data-driven framework that integrates the practical needs of education to propose a novel pipeline and validation method.**\\n\\nIn other words, **the dataset serves as a necessary foundation for creating the experimental environment, not the central contribution of the research.** The design of the SOE pipeline goes beyond simply stitching together various data processing modules. Instead, **it addresses the complexities and diversity of virtual student modeling and evaluation through the modular design of Scene, Object, and Evaluation.** Each module, from data construction to experimental validation, has been carefully designed and tested. 
The core goal is to operationalize and scientifically model personalized virtual students and multi-dimensional evaluations in educational contexts through an interdisciplinary research platform.\\n\\n[3] Xu, Y., Wang, F., An, Z., Wang, Q., & Zhang, Z. (2023). Artificial intelligence for science\\u2014bridging data to wisdom. The Innovation, 4(6).\\n\\n[4] Liang, W., Tadesse, G. A., Ho, D., Fei-Fei, L., Zaharia, M., Zhang, C., & Zou, J. (2022). Advances, challenges and opportunities in creating data for trustworthy AI. Nature Machine Intelligence, 4(8), 669-677.\"}", "{\"title\": \"Common Concern 3\\uff1aThe Cognitive Levels and Corresponding Ability Evaluation of Virtual Students\", \"comment\": [\"We sincerely thank the reviewers for their concerns regarding the evaluation of the cognitive abilities of virtual students (e.g., problem-solving, reasoning, creative thinking), which are essential to assessing the realism of virtual students and their applicability across diverse educational contexts. We fully agree that these abilities are critical and welcome this opportunity to clarify and expand on how our study addresses these aspects.\", \"**1. Cognitive Ability Modeling Is Reflected in the Experimental Design:**\", \"In this study, we designed the SOE pipeline with an emphasis on the importance of cognitive development in areas like problem-solving and creative thinking. Consequently, we included question types as a key evaluation dimension to reflect the virtual students' ability to operate at different cognitive levels (**see Sec 4.1 & Appendix B.2.2**):\", \"**Lower-Order Cognition Based on Closed-Ended Questions**: Focused on students\\u2019 understanding and memory (e.g., explanatory abilities).\", \"**Higher-Order Cognition Based on Open-Ended Questions**: Focused on application and analysis (e.g., problem-solving and reasoning) as well as creation and evaluation (e.g., creative thinking). 
We can take an example of our experiment scene (see more examples in **Appendix D.2.2**).\", \"**Problem-Solving Ability**: This aspect primarily focuses on cognitive levels of application and analysis. We designed task-based questions closely related to the content of the lesson, requiring virtual students to engage in reasoning and analysis. For instance, when discussing reading material, HE students were able to provide clear and logical answers based on the text.\", \"**Example Question**: \\u201cDuanmu Hongliang used many vivid words to express his feelings for his hometown. Can you find some examples and discuss them?\\u201d\", \"**Example Response**: \\u201cSure, teacher. For example, \\u201cMy hometown is in the northeast, with its dark, fertile soil, golden grains, red maple leaves, and white snow.\\u201d These descriptions make me feel his deep love for his hometown.\\u201d\", \"**Creative Thinking Ability**: This aspect evaluates students at the evaluation and creation levels of cognition. We designed open-ended questions to encourage students to express creative ideas, such as reinterpreting the theme of a passage or continuing a story. HE virtual students demonstrated proactive responses.\", \"**Example Question**: \\u201cNow, let\\u2019s see whose imagination is the most creative. Based on the text, can you describe the image of the Yellow River in your own words?\\u201d\", \"**Example Response**: \\u201cSure, Teacher. In the poem, the Yellow River is depicted as a great mother who nourishes the entire Chinese land. Its surging waves symbolize the resilience and perseverance of the Chinese people, and its grandeur reflects their greatness and pride.\\u201d\", \"**2. Research Focus on Modeling More Human-Like and Personalized LVSA**:\", \"In educational scenarios, students display diverse traits and behaviors, with varying cognitive abilities. 
**Rather than focusing solely on advanced capabilities, we prioritize modeling and assessing human-like, diverse virtual student behaviors. The priority lies in evaluating whether the base model can support virtual student modeling across all teaching stages and provide effective feedback for pre-service teachers.** Although comprehensive analysis of LVSA abilities is not the main focus at this stage, our experimental data suggests that LVSA already shows representative potential in complex cognitive behaviors, such as problem-solving and creative thinking.\", \"In future work, we plan to further expand this aspect through the following steps:\", \"**Refining the Cognitive Evaluation Framework**: Drawing on cognitive taxonomy theories (e.g., Bloom's taxonomy), we aim to design more fine-grained question types to comprehensively cover virtual students' performance in complex cognitive behaviors.\", \"**Analyzing the Relationship Between Personality and Cognitive Abilities**: Investigate how virtual students with different personality traits perform in problem-solving and creative tasks, to further improve the modeling accuracy of LVSA.\", \"**Expanding Multidisciplinary Scenarios**: Extend evaluation tasks to other disciplines (e.g., solving math problems, designing scientific experiments) to test LVSA's cognitive simulation capabilities across subject domains.\", \"We greatly appreciate the reviewers' valuable suggestions regarding complex cognitive tasks. Through the evaluation dimension of question types, this study has preliminarily validated the potential of virtual students in problem-solving and creative thinking. 
In the future, we will conduct further in-depth analysis to continually enhance the simulation capabilities and educational value of LVSA.\"]}", "{\"title\": \"Response to Reviewer 3QkY's Concern About the Pipeline Novelty and Generalization (Part I: Innovation of the SOE Pipeline)\", \"comment\": \"We sincerely thank the reviewer for the thorough reexamination of our research. Your deep insights into the intersection of artificial intelligence and educational sciences have provided invaluable suggestions. Your decision to raise our score from 3 to 5 is greatly encouraging to our team. **We highly value your feedback and have taken extra time to reorganize our responses to address your concerns in detail. We believe that the complexity of interdisciplinary research implies that researchers from diverse academic backgrounds may interpret \\\"novelty\\\" differently.** This has motivated us to refine our presentation to better align with a broad academic context and diverse research needs. Your feedback has been instrumental in helping us clarify the contributions of this study and precisely articulate its innovations, demonstrating its potential for future applications.\", \"below_is_our_detailed_response_to_your_comments\": \"**1. Innovation of the SOE Pipeline**\\n\\nRegarding the innovation concerns raised by the reviewer, we fully understand and appreciate your feedback. You referred to the \\\"model novelty and generalization\\\" and considered the SOE pipeline as a traditional AI model. However, we would like to clarify that, unlike traditional AI models that primarily focus on algorithmic innovation, this study is situated within the AI for Social Science domain. **Research in this area particularly requires the integration of interdisciplinary theories with AI agents to develop more human-like research pipelines [1]. 
The inclusion of these theories is not only significant for data construction but also enhances the model's interpretability, fairness, and generalization ability [2], offering a foundation upon which future research can build.**\\n\\nThis work follows this paradigm to address the challenges of modeling and evaluating virtual students by proposing the SOE pipeline. The core innovation of this pipeline lies not in algorithmic advances but in **its combination of interdisciplinary theories to provide a practical research paradigm. This includes the introduction of theoretical perspectives and the proposal of a mixed-method approach for evaluating the simulation quality of large language models, combining both subjective and objective evaluation methods.** This approach supports future research in virtual student construction, role modeling, and educational/social simulation experiments. Our pipeline integrates experimentally validated theories and methods across the design, modeling, and evaluation stages. **Furthermore, the modular design ensures the framework's scalability and generalization across diverse educational and social simulation environments.**\\n\\nIn the subsequent response box, we will address each of your comments on the novelty of our work in detail.\\n\\n[1] Xu, R., Sun, Y., Ren, M., Guo, S., Pan, R., Lin, H., ... & Han, X. (2024). AI for social science and social science of AI: A survey. Information Processing & Management, 61(3), 103665.\\n\\n[2] Radford, J., & Joseph, K. (2020). Theory In, Theory Out: How social theory can solve problems that machine learning can\\u2019t. 
arXiv preprint arXiv:2001.03203.\"}", "{\"title\": \"Response to Reviewer rtkz's concern about the details of LC LVSA\", \"comment\": \"***W4&Q4: Could you elaborate on the specific challenges faced and any steps taken to mitigate these problems?***\\n\\n**Response**: We sincerely thank the reviewer's attention to the modeling of Low Conscientiousness (LC) personality traits, as well as the challenges and potential future developments associated with it. During the research process, we placed significant emphasis on analyzing bad cases for the LC personality (**Sec 5.6**) and provided relevant examples (**Appendix D.7.1**). **The careful selection and modeling of the LC personality reflect this study's commitment to advancing the capabilities of large language models from an \\\"Education for AI\\\" perspective.**\\n\\nBased on the reviewer's feedback, we have thoroughly discussed these challenges and proposed corresponding solutions to enhance the model\\u2019s accuracy and practicality. These analyses have been incorporated into the revised version of the paper in **section 5.6 and Appendix D.7.1.** \\n\\nBelow is our detailed response.\\n\\n**1. Challenges in Modeling the LC Personality in the Current Study**\\n\\n- **Data Sparsity and Lack of Diversity**: \\nBehavioral manifestations of LC personality traits are relatively sparse in existing datasets, making it challenging for the model to accurately capture and simulate the complexity of such traits. Additionally, due to the limited data, the model may struggle to effectively reflect the typical behaviors of LC personalities in specific contexts.\\n- **Model Hallucination Issues**: \\nWhen simulating LC personalities, the model occasionally generates hallucinated content that does not align with the context or personality traits. These hallucinations may include repetitive language, factual errors, and other issues that affect the quality and reliability of the model's output. 
Detailed examples of task incompletion due to repetitive language and factual errors are provided in Appendix D.7.1.\\n- **Ethical and Safety Considerations**: \\nSince LC personality traits may involve negative or non-educational-standard behaviors (e.g., task avoidance, neglecting details), the model must balance the authenticity of personality traits with adherence to ethical standards. During modeling, this may trigger the ethical safety mechanisms of large language models, further increasing the complexity of modeling.\\n\\n**2. Measures Taken in the Current Study**\\n- **Explicit Prompt Design for Personality Traits (see Appendix B.2.2)**: \\nDuring the data generation phase, we incorporated detailed descriptions of LC personality traits into the prompts, along with relevant examples, to help the model better understand and generate behavior consistent with these traits.\\n- **Context-Driven Generation (see Appendix C.2.2)**: \\nDuring data generation, we integrated specific teaching scenarios and task requirements to guide the model in producing content that is more task-relevant and context-appropriate, reducing the likelihood of irrelevant or hallucinated information and improving task alignment.\\n- **Validation and Optimization of Outputs (see Appendix C.1.2)**: \\nIn the post-generation phase, we conducted rigorous validation of the outputs, including consistency checks for LC personality traits and semantic optimization, to ensure that the generated content aligns with the target personality traits and meets the requirements of the teaching tasks.\\n\\n**3. 
Future Optimization Directions**\\n- **Enhancing Data Diversity and Representativeness**: \\nBy designing and collecting more teaching scenarios and tasks that reflect LC personality traits, we aim to enrich the training data and improve the model's ability to simulate this type of personality.\\n- **Optimizing Personalized Modeling Mechanisms**: \\nWe plan to explore and implement more precise personalized modeling techniques, such as personalized fine-tuning or constrained modeling methods, to ensure the model can accurately simulate the behaviors of various complex personalities.\\n- **Strengthening Monitoring and Correction of Hallucinated Content**: \\nWe aim to develop dedicated mechanisms for detecting and correcting hallucinated content, such as using natural language processing techniques to identify and rectify inconsistent or illogical outputs, thereby enhancing the reliability and practicality of the model's outputs.\\n\\nThe above content is also presented in the bad case section of the paper and its accompanying appendix materials. Through these additions, we hope that the findings of this study can inspire researchers to pay closer attention to the limitations of existing large language models in modeling specific personality traits. By providing concrete examples and optimization suggestions, we aim to help address the challenges encountered in simulating LC personality traits, further enhancing the practical value and educational effectiveness of the LVSA model. Once again, we sincerely thank the reviewer's suggestion.\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about overstatements regarding LVSA's importance for pre-service teacher training\", \"comment\": [\"***Q8: The assertions like 'These findings suggest that LVSA effectively adapts to different learning stages, offering comprehensive support for pre-service teacher training. 
Virtual students enable teachers to practice and refine instructional strategies across all teaching phases, enhancing skill development throughout the entire instructional process.' in paper sound weak.***\", \"**Response**: Thanks for the reviewer's attention to our conclusions regarding the role of LVSA in pre-service teacher training. We understand your concerns that some statements might appear exaggerated. Specifically, **the first quoted statement emphasizes the study's conclusions across different teaching stages, while the second highlights the broader significance of these findings.** We have adjusted the expressions in the revised manuscript (**see Sec 5.4 & Sec 6**) to specify that these findings were derived from controlled experimental settings rather than real teaching experiments to avoid potential misunderstandings.\", \"Below, we further clarify the basis of these conclusions.\", \"**1. Basis of the Research Conclusions**\", \"**Validation Through Multi-Stage Teaching Tasks**\", \"**Experimental Design**: This study designed tasks covering different teaching stages (e.g., pre-lesson introduction, new knowledge instruction, knowledge consolidation, classroom practice, and summary) to validate LVSA's adaptability in supporting pre-service teachers across the teaching process. This aligns with the statement that \\u201cLVSA effectively adapts to different learning stages.\\u201d\", \"**Results**: During these tasks, LVSA successfully simulated diverse personality traits and behaviors (e.g., HE, HN), providing pre-service teachers with opportunities to practice interacting with different types of virtual students. 
Experimental data indicate that LVSA effectively mimics complex student behaviors often encountered in real classrooms, supporting the statement \"offering comprehensive support for pre-service teacher training.\" Here, \\u201ccomprehensive\\u201d emphasizes the breadth of teaching stages rather than overall applicability.\", \"**2. Feedback from Pre-Service Teachers**\", \"**Method**: Semi-structured interviews were conducted with pre-service teachers after the experiments to collect their feedback (see **Appendix D.5.1**).\", \"**Feedback Content**: Teachers generally reported that LVSA provided diverse teaching scenarios, allowing them to practice interacting with different types of students. For instance, some teachers mentioned that LVSA helped them adjust and refine their teaching skills based on the varying developmental levels of students and strengthened their pedagogical content knowledge (from evaluators 1 and 4). Others noted that \\u201cVirtual students simulate authentic student feedback, allowing pre-service teachers to practice in a close-to-real environment, thereby improving their teaching skills and classroom management abilities\\u201d (from evaluator 6).\", \"**3. Recognition from Peer Reviewers**\", \"Several reviewers also acknowledged the importance of this study in supporting pre-service teacher training, further affirming LVSA\\u2019s potential application value:\", \"**Reviewer yWBj mentioned**: \\u201cThis paper can greatly impact teacher training and educational AI systems. 
By focusing on virtual students, it provides a fresh and creative problem formulation, opening new avenues in AI4Education research.\\u201d\", \"**Reviewer wxDr commented**: \\u201cApplying virtual students for teacher training can be potentially impactful.\\u201d\", \"**Reviewer rtkz noted**: \\u201cThe significance of this work is substantial, as it addresses an important gap in pre-service teacher training by providing a tool for realistic, student-like simulations. In traditional teacher training, access to diverse student interactions is often limited, and this work offers a scalable solution to that problem.\\u201d\", \"**4. Clarifying the Scope of the Conclusions**\", \"**Nature of the Study**: The conclusions of this study are based on simulated teaching tasks and initial feedback, and we do not claim that LVSA has been fully validated in real teaching scenarios. We have added a discussion on this point in the revised manuscript and identified it as a key direction for future research (see Sec 6).\", \"**Future Plans**: We plan to conduct real teaching experiments and long-term studies in subsequent work to further validate LVSA\\u2019s value in supporting pre-service teacher skill development. This includes designing interactive experiments in real classroom settings and observing the trajectory of teacher skill improvement through longitudinal studies.\", \"Thanks again for the reviewer's valuable comments on the conclusions of this study. These insights not only allow us to clarify the value of our research but also provide important guidance for future directions.\"]}", "{\"title\": \"Response to Reviewer 3QkY's concern about the lack of public datasets evaluation on the SOE pipeline\", \"comment\": \"***Q3: This work only uses one Chinese dataset from the authors. 
No public datasets or English datasets are used.***\\n\\n**Response**: Thanks for the reviewer's concern regarding the use of a single dataset and the generalization ability of the SOE pipeline. We understand and respect your view on the importance of data diversity and comprehensive evaluation. **However, as this study is the first to systematically explore the use of fine-tuning techniques for virtual student modeling with LLMs, there are currently no publicly available datasets suitable for this task.**\\n\\nBelow, we will explain in detail the research methods we chose and our plans.\\n\\n**1. Use of a Single Dataset and Its Challenges**\\n\\nOur study mainly used a self-constructed Chinese dataset due to the novelty of the research question. **This innovative perspective also leads to a lack of publicly available datasets that meet the specific needs of virtual student modeling.** Although this limits our ability to directly verify the generalization ability of the SOE pipeline on other datasets, this choice was made based on the following considerations:\\n\\n- **Data Suitability**: Existing public education datasets mainly focus on traditional teaching interactions and lack detailed annotations of the diverse behaviors of virtual students. This is insufficient for our study of specific behaviors and response patterns of virtual students.\\n- **Innovation in Data Generation**: To overcome the limitations of existing datasets, we adopted a method using GPT-4 to simulate diverse student behaviors and dialogues. This not only provides a broader range of behavioral characteristics but also ensures high data quality and experimental controllability. This method has already been widely used in fine-tuning research for large models [2-3].\\n\\n**2. 
Generalization Ability of the SOE Pipeline**\\n\\nAlthough this study primarily relies on a Chinese dataset, the design of the SOE pipeline inherently supports adaptation to multilingual and multicultural environments as noted in common concern 1. The following key features demonstrate its generalization potential:\\n- **Modular Design**: The SOE pipeline\\u2019s modular structure\\u2014comprising Scene, Object, and Evaluation\\u2014allows us to flexibly adjust the models and data processing methods according to the requirements of different educational scenarios and disciplines. This design facilitates future extensions to datasets in other languages, including English.\\n- **Cross-Cultural Adaptability**: Although the current research focuses on the Chinese context, the pipeline's structural design enables easy adaptation to different cultural and educational systems. In the future, we plan to introduce multilingual datasets to validate and optimize the adaptability and effectiveness of the SOE pipeline in diverse cultural contexts.\\n\\n**3. Future Research Directions**\\n\\nBased on the reviewer's valuable feedback, we plan to broaden the diversity and coverage of datasets in future research, including:\\n- **Incorporating Public Datasets**: We will explore using existing multilingual education datasets and adapt them as necessary to support cross-cultural research.\\n- **Multi-Scenario Validation**: By applying the SOE pipeline to different educational scenarios and disciplines, we aim to further test and optimize its generalization ability and practical value.\\n\\nThanks again for the suggestion. We look forward to enhancing the generalization ability of our research through these improvements and expansions, while better addressing the need for data diversity and model adaptability in the educational technology field.\\n\\n[2] Wang, Y., Kordi, Y., Mishra, S., Liu, A., Smith, N. A., Khashabi, D., & Hajishirzi, H. (2022). 
Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560.\\n\\n[3] Rohan Taori et al. Stanford Alpaca: An Instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca. 2023.\"}", "{\"title\": \"Adjusted Score\", \"comment\": \"I'd like to say thanks to the authors for the comprehensive improvement. I just checked other responses. Although my concern about model novelty and generalization is not solved yet, other concerns such as automatic evaluation and clarification are well-addressed. Therefore, I'd like to increase my score from 3 to 5.\\n\\nGood luck!\"}", "{\"title\": \"Response to Reviewer wxDr's concern about ethical issues and data privacy\", \"comment\": \"***Q5: Could the authors elaborate on the ethical implications of simulating student behaviors (data privacy could be a concern)?***\\n\\n**Response**: We appreciate the reviewer's attention to the ethical issues of virtual students, particularly regarding data privacy. Ensuring that virtual student research adheres to high ethical standards and data protection regulations is a core principle in the design and implementation of this study. In this project, we have implemented multiple measures to ensure that all data usage strictly complies with ethical and privacy protection standards. Additionally, we have included this information in the revised **Appendix D.8** to help readers better understand the compliance and security aspects of this work. \\n\\nBelow is our detailed response.\\n\\n- **Security of Generated Data**: The datasets used during the fine-tuning process were generated by large language models and are entirely artificial, solely intended for scientific research. 
These datasets are specifically designed to exclude any personally identifiable information, fundamentally eliminating the risk of privacy breaches.\\n- **Legitimacy of Real Data**: The real student data used in this study is sourced from public and authoritative national websites. These resources comply with relevant laws and regulations, ensuring their openness and the legality of their use.\\n- **Generality of Behavioral Simulation**: We ensured that the behaviors generated by virtual students are not targeted at any specific individuals but are abstractly modeled based on a wide range of educational scenarios. This approach guarantees the generality and anonymity of the generated data, avoiding direct associations with real individuals.\\n- **Compliance with Ethical Standards in Existing Research**: Our study strictly adheres to the ethical standards established by prior work and references the following studies in the data usage process [6]. Additionally, we employ advanced data security techniques to protect the data during storage and processing, such as encryption and access control mechanisms, ensuring that only authorized researchers can access the relevant data. By following scientific and compliant practices, we effectively mitigate potential privacy risks.\\n\\nIn future work, we will continue to strengthen the ethical and privacy protection measures for the virtual student model, especially as we incorporate multimodal data and expand into multidisciplinary applications. We plan to adopt advanced privacy protection technologies, update ethical guidelines, and enhance research transparency to ensure that all research activities adhere to the highest ethical standards. This will provide a solid foundation for the safe and responsible application of technology in the education field.\\n\\nIn conclusion, we sincerely thank the reviewer for raising the important issues of ethics and privacy. 
We have thoroughly considered these aspects during our experiments and are committed to continuing to address and optimize them in future research. Through these measures, we aim to provide a solid foundation for the safe and responsible application of virtual student technology in the education field.\\n\\n[6] Yizhong Wang et al. \\u201cSelf-Instruct: Aligning Language Model with Self Generated Instructions\\u201d. In: arXiv preprint arXiv:2212.10560 (2022).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about the lack of innovation or contribution in the SOE pipeline (Part II)\", \"comment\": [\"**2. Application of the Big Five Personality Traits and Innovation**\", \"The Big Five personality theory, widely used in generative AI research, was adopted for its scientific framework to systematically simulate adolescent students' personality traits. **This approach enhances research generalizability and reduces biases in personality modeling**. Below are the reasons for selecting and applying the Big Five traits in this study:\", \"**Reasons for Choosing the Big Five Personality Theory**:\", \"**Universality and Scientific Basis**: The Big Five personality theory is one of the most widely accepted personality theories in psychology. It provides a comprehensive and structured framework for analyzing individual differences. 
Its universality makes it an ideal tool for understanding human behavior in interdisciplinary research.\", \"**Alignment With Educational Psychology**: Compared to other psychological theories, such as Cattell\\u2019s personality theory, which is overly complex, or Maslow\\u2019s hierarchy of needs, Allport\\u2019s value theory, and Atkinson\\u2019s achievement motivation theory, **which primarily focus on psychological factors other than personality traits**, the Big Five personality theory can be more directly applied to simulate student behavior and interactions in educational scenarios.\", \"**Strategies for Selecting Personality Traits**:\", \"**Goal-Oriented Trait Selection**: Among the ten high and low levels of traits within the Big Five framework, **we did not make random selections. Instead, we carefully selected five traits with high distinguishability and strong relevance to educational psychology**. This selection emphasizes traits that significantly influence student learning behavior and classroom interactions.\", \"**Challenges and Opportunities**: **By focusing on specific traits, such as low conscientiousness (LC), we explored and addressed challenges in simulating these complex traits in virtual environments**, such as hallucination issues and ethical safety triggers. These efforts not only reveal current limitations in LLM-based role modeling but also point to future improvement directions.\", \"Our research applies recognized psychological theory and introduces an innovative method for modeling and evaluating virtual students, blending educational principles with AI technology. 
This interdisciplinary approach fosters personalized, adaptive education and marks a significant scientific advancement.\", \"In Summary, **the primary difficulty and core innovation of our work lie in utilizing AI technologies within interdisciplinary research to address the challenges of modeling and evaluating virtual students in educational scenarios.** Although we employed existing LLMs for fine-tuning, our innovation lies in **how these technologies were applied to specific educational psychology scenarios, particularly in simulating complex personality traits and educational interactions**. The modular design of the SOE pipeline allows flexible adjustments to accommodate diverse educational needs, which is uncommon in traditional LLM applications. Additionally, while we leveraged existing technologies for experimentation, the key innovation lies in our approach to accurately simulate and evaluate students with different personality traits and enhance the model\\u2019s educational applicability and psychological fidelity through fine-tuning strategies.\", \"We trust this clarification underscores the SOE framework's innovative aspects and the intricate challenges and novel solutions of our interdisciplinary research. We appreciate your feedback and are committed to further advancing educational technology through ongoing research and innovation.\"]}", "{\"title\": \"Response to Reviewer 3QkY's concern about the effectiveness of GPT-4 in automatic evaluation\", \"comment\": [\"***Q5: line 373 cannot prove that GPT 4 is a convincing automatic evaluation tool to replace real human annotators to perform automatic evaluation.***\", \"**Response**: Thanks for raising questions about the validity of using GPT-4 for automatic evaluation. 
To ensure the scientific rigor and reliability of our experimental results, the design process of our experiments incorporates social science research paradigms, which may have caused some misunderstanding regarding the consistency analysis in our evaluations.\", \"Below, we will clarify all consistency-related experiments in our study and explain why GPT-4 can serve as a reliable tool for large-scale automatic evaluation.\", \"**1. Consistency Analysis in Evaluations**\", \"Our study includes three levels of consistency experiments, involving both human evaluators and GPT-4, to ensure scientific and objective evaluation results:\", \"**Consistency Among 10 Human Evaluators**:\", \"**Objective**: To verify whether the assessments of \\\"whether virtual students exhibit human-like traits\\\" by 10 human evaluators are consistent, eliminating randomness.\", \"**Method**: Fleiss\\u2019s Kappa coefficient was used for statistical analysis, proving the consistency among human evaluators and ensuring the reliability of evaluation conclusions.\", \"**Consistency Between Two Content Coders**:\", \"**Objective**: To extract scientifically grounded evaluation dimensions used in GPT-4 prompts by coding the evaluation process of human evaluators.\", \"**Method**: Two coders were invited to analyze the interview data of the evaluators, and their consistency was calculated to prevent personal bias in coding the evaluation dimensions (a step consistent with the content analysis method in social science research).\", \"**Outcome**: 4 primary and 15 secondary evaluation dimensions were extracted to design the GPT-4 prompts, ensuring that GPT-4\\u2019s evaluations were scientifically based.\", \"**Consistency Between 10 Human Evaluators and GPT-4**:\", \"**Objective**: To verify whether GPT-4's automatic evaluation aligns with human evaluators, thus assessing its reliability as an evaluation tool.\", \"**Method**: A Fleiss\\u2019s Kappa analysis was conducted to compare human 
evaluators\\u2019 and GPT-4\\u2019s results, showing a high degree of consistency. This confirms GPT-4\\u2019s applicability as a tool for large-scale evaluations to improve efficiency.\", \"**Clarification**: The reviewer may have misunderstood that we only analyzed coder consistency. In fact, GPT-4\\u2019s evaluation capability was validated against 10 human evaluators, with a consistency score confirming its suitability for large-scale evaluations. (Details on Fleiss\\u2019s Kappa are provided in Response to Q6.)\", \"**2. Reasons for Using GPT-4 for Automatic Evaluation**\", \"**Evaluation Needs**: The complexity of educational scenarios and the diversity of language tasks make defining a single ground truth impossible, mirroring the complexity of real student behavior. This underscores the need for the SOE pipeline and a tool to process large-scale data efficiently and provide rapid feedback.\", \"**Scientific Basis**: GPT-4 has demonstrated high efficiency and reliability in multiple studies on language consistency, fluency, and sentiment analysis [5-7]. These attributes make it an ideal tool for large-scale evaluations.\", \"To ensure the scientific rigor of our evaluation approach, we also employed additional methods as supplements:\", \"**Multi-Dimensional Evaluation Strategy**: By combining GPT-4\\u2019s automatic evaluation with subjective judgments from human evaluators, we developed a comprehensive framework to evaluate virtual student behavior. This combination enhances both the efficiency and the scientific rigor of the evaluations.\", \"**Supplementing with Objective Metrics**: To address the potential limitations of automatic evaluation, we have added objective metrics such as perplexity, type-token ratio (TTR), and sentiment analysis (**see Sec 5.5 & Appendix D.6**). 
These further validated the reliability and comprehensiveness of the evaluation results.\", \"We hope these explanations address your concerns about GPT-4's effectiveness in automatic evaluation and demonstrate the rigor of our experimental design. Combining human evaluators with automated tools ensures precise and comprehensive assessments. Thanks for the insightful comments and suggestions.\", \"[5] Fabbri, A. R., Kry\\u015bci\\u0144ski, W., McCann, B., Xiong, C., Socher, R., & Radev, D. (2021). Summeval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9, 391-409.\", \"[6] Huang, Y., Bai, Y., Zhu, Z., Zhang, J., Zhang, J., Su, T., ... & He, J. (2024). C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. Advances in Neural Information Processing Systems, 36.\", \"[7] Mao, R., Chen, G., Zhang, X., Guerin, F., & Cambria, E. (2023). GPTEval: A survey on assessments of ChatGPT and GPT-4. arXiv preprint arXiv:2308.12488.\"]}", "{\"title\": \"Response to Reviewer 3QkY's concern about the lack of innovation or contribution in the SOE pipeline (Part I)\", \"comment\": [\"***Q1: The proposed SOE framework looks more like a dataset processing and LLM testing pipeline. Moreover, the incorporation of Big Five theory for agent personality is also widely utilized in existing work.***\", \"**Response**: Thanks for the question. In addressing these concerns, we have placed special emphasis on the interdisciplinary nature and challenges of this study, **particularly its integration of education, psychology, social sciences, and AI technologies**.\", \"We would like to further elaborate on the core contributions of the SOE framework, which have also been recognized by reviewers yWBj and rtkz. 
Reviewer yWBj noted that \\u201c*The paper presents a novel shift from teacher-centric AI systems to virtual student simulations and provides a fresh and creative problem formulation, opening new avenues in AI4Education research.*\\u201d Similarly, reviewer rtkz stated that \\u201c*This paper brings a high degree of originality by shifting the focus from expert or teacher-oriented AI simulations to student simulations. The contributions align well with the ICLR community\\u2019s interest in novel applications of AI, particularly in education and social simulation, and could stimulate further research into AI-driven, personalized learning simulations.*\\u201d\", \"**Below, we detail how the SOE framework employs social science research methodologies alongside AI technologies to tackle the complexities of virtual student simulations and their evaluations**.\", \"**1. Research Challenges and Core Contributions of the SOE Framework**\", \"This study transcends mere experimental validation of LLMs by addressing the significant challenge of modeling and evaluating virtual students through the innovative SOE (Scene-Object-Evaluation) framework. Unlike traditional LLM applications, this framework integrates social science theories, such as the Big Five personality traits and student cognitive development theories, with advanced AI fine-tuning techniques like LLM LoRA. This integration is critical for simulating personalized and human-like behaviors of virtual students throughout the entire educational process. 
**Our research necessitates a profound understanding of these social theories and the precise translation of complex psychological and behavioral concepts into computable data for algorithmic processing.** By adopting a multi-disciplinary approach, we have refined our research methodology to incorporate diverse academic perspectives effectively.\", \"**In-Depth Thinking Before Research Begins**:\", \"**Careful Selection of Validation Scenarios**: Language learning was selected due to its observable, open, and diverse nature, making it ideal for validating the SOE pipeline's theoretical foundation and practical applicability.\", \"**Analyzing the Challenges of Modeling Virtual Students**: Student behaviors are highly diverse and open-ended, raising critical questions such as:\", \"What diverse personality traits do students exhibit?\", \"What cognitive development stages do students undergo during learning?\", \"How do students with different personality traits behave?\", \"**Interdisciplinary Approach to Modeling Pipeline Development**:\", \"The SOE pipeline integrates social sciences and AI to propose a modular framework bridging conceptual and operational theories. 
Its modular design (Scene-Object-Evaluation) supports diverse functionalities, as noted in common concern 1.\", \"**To tackle interdisciplinary evaluation challenges, we combined methodologies from social sciences (empirical analysis), psychology (surveys), education research (structured interviews and coding), LLM studies (subjective evaluation), and NLP (objective metrics) to develop a comprehensive evaluation system:**\", \"**Challenges in Evaluating Virtual Students**: Spontaneous and varied student language lacks fixed ground truth, making traditional accuracy-based methods unsuitable.\", \"**Systematic Subjective Evaluation**: Expert interviews and content analysis encoded evaluation dimensions, ensuring consistency and rigor, while GPT-4 enabled efficient large-scale evaluations.\", \"**Objective Evaluation Supplements**: Incorporating metrics like text length, perplexity, TTR, and sentiment analysis validated subjective evaluation results from multiple perspectives. However, these metrics have limitations in capturing the complexity of student language behaviors in educational scenarios. These limitations underscore the necessity of incorporating subjective evaluation methods.\"]}", "{\"comment\": \"Thanks for the reply. It still does not make sense to me.\\n\\nFirst, I indeed did not see the unique novelty of the proposed dataset. In my view, it is just similar to most education datasets. Perhaps the difference is that the language is Chinese. However, there are also lots of Chinese datasets.\\n\\nMoreover, to really demonstrate the generalization ability, it is always more convincing to run a real experiment on additional datasets, instead of simply deferring it to future work.\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about evaluation of other datasets\", \"comment\": \"We sincerely appreciate the reviewer's inquiry. We fully understand your concern regarding the absence of validation using multiple datasets. 
Indeed, multi-dataset validation is a crucial step in assessing the generalization and robustness of AI algorithm models. **However, the SOE pipeline proposed in this study is not a traditional AI algorithm model, but rather an operational framework designed for application in educational internships and social simulation experiments.**\\n\\nAt the early stages of our research, we actively sought publicly available datasets. However, **datasets containing teacher-student dialogues that align with the Big Five personality framework are extremely rare.** Existing Chinese dialogue and educational datasets are not relevant to the experimental scenarios of this study, making them unsuitable for direct application in our experiments.\\n\\nFurthermore, we believe that cross-dataset generalization tests are more focused on evaluating the language transferability of large language models, a capability that has already been extensively studied and validated. **The core challenge we aim to address is the development of hierarchical understanding capabilities in virtual students across different scenarios. This hierarchical understanding requires virtual students to move from superficial comprehension to contextual interpretation and, ultimately, to a deeper understanding of the emotional themes expressed in the text.** We have already conducted systematic experimental validation using our custom-built dataset, demonstrating the hierarchical understanding abilities of virtual students in text comprehension tasks and across different teaching stages. **This validates that the SOE pipeline can enable personalized, hierarchical understanding.** For datasets in different languages, the language transferability of large language models can further optimize this capability.\\n\\nA detailed response to this concern is provided in **Parts V and VI of the rebuttal summary box**. 
We again thank the reviewer for the thoughtful question.\"}", "{\"title\": \"Response to Reviewer 3QkY's Concern About the Pipeline Novelty and Generalization (Part II: Reasoning Ability)\", \"comment\": \"***Reviewer's Concern on Lack of Breakthrough in LLM\\u2019s Reasoning Ability***\\n\\nWe appreciate the reviewer's attention to our work, particularly the thoughtful consideration of LLM reasoning abilities. We fully understand and agree with the importance of reasoning in large language models and virtual student modeling. In this study, **reasoning ability is systematically integrated into the SOE framework across various modules**. \\n\\nBelow, we elaborate on how each module of the SOE pipeline addresses reasoning ability:\\n\\n - **Scene Module (Subject Scene Selection and Reasoning Validation):**\\n\\nWhen designing the Scene module, we carefully considered the selection of subject areas. Unlike fields such as STEM, where reasoning is based on objective, rule-based facts, language-based disciplines emphasize reasoning grounded in semantics and contextual understanding (text comprehension). **This reasoning process is often facilitated by teachers through guided questioning at various stages of instruction, reflecting the Socratic method, where continuous questioning and dialogue aim to enhance students' reasoning abilities.** In contrast to STEM subjects, where fixed answers exist, language scenarios see students with different personality traits providing varied interpretations and expressions. 
**This aligns with the goal of this study to model personalized and human-like virtual students and is a significant manifestation of the challenges in modeling and evaluation.** Therefore, in this module, we preliminarily validated the reasoning ability of base models through text comprehension tasks and selected four base models suitable for this educational context.\\n\\n - **Object Module (Instruction Fine-Tuning Data and Reasoning Representation):**\\n\\nIn the Object module, we constructed a scientifically grounded teacher-student dialogue fine-tuning dataset within a reasonable theoretical framework. **A key focus of this process was the transition from surface-level imagery to deep metaphorical reasoning, an essential element in language-based contexts.** For example, in the dialogue example shown in Figure A8, the teacher asks, \\u201cIn the final paragraph, 'The two goldfish are still swimming, and the chrysanthemums are blooming,' what do you think the chrysanthemums and the goldfish symbolize?\\u201d Here, \\u201cchrysanthemums\\u201d and \\u201cgoldfish\\u201d serve as surface imagery, requiring students to use context and the teacher\\u2019s explanation to infer their deeper metaphorical significance. This process represents the embodiment of reasoning ability based on text comprehension. In constructing the dataset, we not only focused on students\\u2019 basic understanding but also systematically built the incremental development of their reasoning abilities. 
**Teacher questions across different teaching stages and problem types guided students from shallow cognition to deeper reasoning, providing virtual students with a rich array of language and reasoning scenarios.**\\n - **Evaluation Module (Fine-Grained Evaluation and Reasoning Validation):**\\nIn the Evaluation module, **fine-grained assessments across different teaching stages and problem types further validated the reasoning abilities of virtual students.** For example:\\n - **Pre-Lesson Introduction Phase**: Virtual students mainly provide simple answers based on understanding and memory. For instance, when asked, \\\"Today, we\\u2019re going to study Shi Tiesheng\\u2019s Autumn Thoughts. Does anyone know who Shi Tiesheng is?\\\" a student responds, \\\"Shi Tiesheng, he's a, a writer, right, right, he wrote I and the Temple of Earth.\\\" Such responses validate the virtual students\\u2019 basic grasp of the text and background knowledge.\\n - **Class Practice Phase**: The questions are designed to encourage deeper reasoning based on text comprehension. For example, when asked, \\\"The two goldfish are still swimming, and the chrysanthemums are blooming, what do you think the chrysanthemums and the goldfish symbolize?\\\" The student responds, \\\"Um, the chrysanthemums probably represent the mother, the mother's love. 
And the goldfish, maybe they represent, uh, life, life continuing.\\\" These responses require students to engage in deeper reasoning and metaphorical interpretation based on context.\\n \\nThrough this fine-grained evaluation, **we can clearly verify whether virtual students can meet the cognitive demands of different teaching stages and provide more appropriate feedback for pre-service teachers.** This also highlights the generalization capabilities of the SOE pipeline in educational contexts.\"}", "{\"title\": \"Response to Reviewer 3QkY's concern about limitations of the Turing Test and the absence of ground truth comparisons\", \"comment\": \"***Q2: Such replication needs labels/ground truth to demonstrate and quantify the accuracy of such replication.***\\n\\n**Response**: Thanks for the insightful question regarding the limitations of the Turing Test and the absence of ground truth comparisons in this study.\\n\\nBy \\\"simulating real student behavior,\\\" we refer to generating responses that mimic the cognitive and emotional reactions of real students in educational scenarios, rather than perfectly replicating their exact behaviors. **Unlike traditional ground truth comparisons, our focus is on assessing whether virtual students can effectively engage in and contribute to educational processes, which has been added in Sec 5.2.**\\n\\nConsidering the potential confusion caused by the term \\\"replicate\\\" in the phrase from Reviewer 053, \\\"virtual student agents must replicate a broader range of human behaviors,\\\" we have revised the wording from \\\"replicate\\\" to \\\"simulate.\\\" (**see Sec 1**)\\n\\nBelow, we provide a detailed response.\\n\\n**1. 
Role of the Turing Test in This Study**\\n- **Turing Test for Validating Human-Likeness**: The Turing Test in this study is not intended to prove the authenticity of virtual student behavior but serves as an evaluation method to verify whether virtual students achieve a human-like level in behavior and language generation.\\n- **Related Work as Reference**: The Stanford Village study [1] assessed generative agents in a complex virtual environment using the Turing Test and a multi-dimensional subjective evaluation, without relying on fixed ground truth. Similarly, our study employs the Turing Test to evaluate whether virtual students' language generation and personality traits approximate human interactive behavior across multiple dimensions.\\n\\n**2. Lack of Ground Truth in This Study**\\n\\nDue to the complexity of language-based educational environments, we did not use fixed ground truth for evaluation. Instead, a multi-dimensional approach combined subjective methods (e.g., Turing Test) with objective metrics (e.g., perplexity, type-token ratio, sentiment analysis) to assess virtual student performance comprehensively.\\n- **Challenges in Defining Ground Truth in Language-Based Experiments**: Language-based experimental scenarios naturally involve diversity and openness, making it difficult to define fixed answers or behavior patterns as ground truth compared to science-based tasks (e.g., mathematical calculations or solving physics problems).\\n- **Alignment With Mainstream Practices in Agent Evaluation**: Similar to the Stanford Village study [1], we did not use fixed ground truth, relying on multi-dimensional methods like Turing Tests and human subjective evaluations to assess generative agents. To address the limitations of these approaches, we incorporated objective metrics such as perplexity, type-token ratio, and sentiment analysis to strengthen our conclusions.\\n\\n**3. 
Real Students in Appendix Figure A18**\\n\\nWe understand the reviewer\\u2019s concern regarding the experimental design involving real students in Appendix Figure A18 and would like to provide further clarification:\\n\\n- The inclusion of real student responses in Appendix Figure A18 was intended to test whether evaluators could distinguish between virtual and real student responses without explicit labels. This was not meant to treat real students as an additional category but rather to increase the difficulty of the experiment, testing whether the virtual students\\u2019 responses were sufficiently human-like to be indistinguishable from those of real students.\\n- **This setup reflects our effort to simulate the authenticity of student behavior, rather than directly using real student responses as ground truth**.\\n- We recognize that this experimental setup may have caused some confusion, so we more clearly explain the experiment\\u2019s objectives and the specific role of real students in the design (see Sec 5.2).\\n\\nThanks again for the valuable feedback. This study validates the human-likeness of virtual student behavior through the Turing Test and multi-dimensional subjective evaluation, supplemented by objective metrics. While the diversity and openness of language tasks limit the direct application of ground truth, our methods align with mainstream practices in the field of generative agent research. This work represents a critical step in validating the theoretical foundation and practical value of the SOE pipeline.\\n\\n[1] Park, J. S., O'Brien, J., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023, October). Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th annual acm symposium on user interface software and technology (pp. 
1-22).\"}", "{\"title\": \"Response to Reviewer rtkz's concern about adding objective evaluation metrics\", \"comment\": \"***W1&W2&Q1: Beyond human and GPT-4 evaluations, did you consider incorporating objective metrics such as linguistic coherence, response diversity, or sentiment analysis to evaluate the LVSA\\u2019s responses?***\\n\\n**Response**: We sincerely thank the valuable suggestion regarding the evaluation methods in this study. We fully agree that incorporating objective metrics can enhance the scientific rigor of the evaluation process. Accordingly, we have conducted additional experiments, and the results align with the conclusions drawn from subjective evaluations (**see Sec 5.5 & Appendix D.6**). **These findings validate the personalization and effectiveness of LVSA modeling while also revealing the limitations of objective metrics, further supporting the core motivation of this study: addressing the challenge of \\\"difficult evaluation of virtual students.\\\"** \\n\\nBelow is our detailed response.\\n\\n**1. Objective Evaluation Metrics**\\n\\n**As noted in common concern 2**, we selected objective metrics (**Text Token, Perplexity, TTR, and Sentiment Analysis**) to evaluate the diverse and open-ended nature of student language in educational scenarios, as traditional reference-based metrics like BLEU and ROUGE were unsuitable. These metrics highlighted differences in fluency, diversity, and emotional tendencies across personality traits, aligning with subjective evaluations. For example, HE exhibited clarity and positive sentiment, while LO and LC reflected conservative or casual styles. 
**However, limitations such as TTR's dependency on text length, Perplexity's sensitivity to technical terms, and directional bias in Sentiment Analysis revealed the need for subjective evaluations to capture the complexity of student language and emotions.** Combining subjective evaluations with GPT-4 automated assessments balanced rigor and efficiency, offering a practical and comprehensive framework for future virtual student modeling.\\n\\n**2. Potential of Psychometric Methods and Future Plan**\\n- **Scientific Value**: We would like to express our gratitude for your suggestion to use psychometric methods to evaluate the personality traits of virtual students. We have thoroughly reviewed relevant literature, paying special attention to the IPIP-NEO questionnaire, which is known for its high reliability and validity in assessing the Big Five personality traits. This tool provides standardized support for validating the consistency between virtual student behaviors and their predefined personality traits.\\n- **Limitations of the Current Study**: We have carefully considered the feasibility of applying this method to our current study. After thorough deliberation, we decided not to include this experiment at this stage due to existing experimental constraints. The IPIP-NEO questionnaire is primarily designed for human individuals, and adapting it for evaluating virtual students requires modifications to its content and scope, along with validation to ensure its effectiveness in virtual environments. This process involves extensive pre-experiments and adjustments, which could not be fully undertaken within the current timeframe and resource limitations.\\n- **Future Plans**: The psychometric approach you suggested will be a priority in our future research agenda. We plan to adapt the IPIP-NEO questionnaire into a personalized measurement scale suitable for virtual students, ensuring its reliability and validity. 
\\n\\nAdditionally, we will integrate psychometric results with subjective evaluations and objective metrics to create a more comprehensive evaluation system. Once again, thanks for the reviewer's highly valuable suggestion.\\n\\n**We sincerely thank the reviewer for the valuable suggestion regarding objective evaluations and psychometric tools.** These recommendations not only provided important references for the evaluation system in this study but also broadened our perspective on virtual student modeling and evaluation. In this study, we validated the effectiveness of LVSA through supplementary objective evaluation experiments while identifying the limitations of objective metrics in educational scenarios, further highlighting the value of subjective evaluation. Although psychometric tools were not incorporated due to experimental constraints, this approach offers an inspiring new direction for our future research endeavors. In future studies, we will use your suggestions as guidance to further refine our research methods, integrating psychometric tools with the evaluation of multidisciplinary tasks to construct a more comprehensive and scientific evaluation framework for virtual students.\"}", "{\"title\": \"Overall response to Reviewer 3QkY\", \"comment\": \"We sincerely thank the reviewer for their thorough review and valuable comments. We greatly appreciate the insightful questions raised, which have prompted us to further reflect on and improve our research. For the three core concerns you raised, we have provided detailed responses in the corresponding sections. Once again, we thank the reviewer for their careful reading and thoughtful guidance.\\n\\n**1. Innovation of the Framework**\\n\\nWe understand the reviewer's focus on the innovation of the framework. We fully agree that innovations in interdisciplinary research often have multiple interpretations and perspectives, and we remain open to further discussion and exploration. 
We have addressed this concern in the section on the framework\\u2019s innovation, emphasizing that the SOE framework is not merely an algorithmic innovation but a new approach that integrates interdisciplinary theories and large language model technologies. This approach provides a novel solution to the challenges of virtual student modeling and evaluation in educational practice. **While our work differs from traditional AI algorithmic innovation, it combines quantitative research from AI with qualitative research from social science theories, and we believe it holds scientific value in its own right.**\\n\\n**2. Absence of Baseline Models**\\n\\nRegarding the reviewer\\u2019s comment on the lack of baseline comparisons, we understand the concern and have explained that existing benchmark models in the educational field predominantly rely on large language models using prompt engineering. Our study, however, focuses on the application of fine-tuning, which differentiates it from these baseline models. **We have made efforts to compare results from prompt engineering with those from fine-tuning and will add clarifications on this aspect in the final version of the paper.**\\n\\n**3. Generalization Validation**\\n\\nRegarding dataset validation, we understand the reviewer's focus on multi-dataset validation. While multi-dataset validation is not the primary focus of our work, the current emphasis is on verifying the adaptability of the SOE pipeline across different teaching stages and problem types. However, in response to the reviewer's keen interest, we have provided a detailed explanation of the uniqueness of the dataset we are using and the reasons why we could not select other publicly available datasets. 
**We also emphasize that, compared to cross-linguistic datasets, which mainly assess the language transferability of large models, hierarchical understanding presents a greater challenge in the context of virtual student modeling under the SOE pipeline.** Based on this, we argue that validating the pipeline's effectiveness in a Chinese-language environment provides strong evidence of its validity, and we expect it to be even more adaptable in a more general English-language context.\\n\\nWe greatly appreciate the reviewer's questions and the profound insights they have provided. **We recognize that understanding interdisciplinary innovations is inherently subjective, and our research does not aim to achieve breakthroughs in algorithms but to offer a new research framework for virtual student modeling and educational evaluation in the AI for Education field.** Through discussions with the reviewer, we have further refined our work and clarified the direction of our research, which has been an invaluable process for us.\\n\\n**Once again, we thank the reviewer for the careful review and invaluable suggestions. We will continue to improve our paper based on your feedback and look forward to further discussions and refinements in the future.**\"}", "{\"title\": \"Response to Reviewer rtkz's concern about Long-term experimental follow-up in the future\", \"comment\": [\"***Q6: Have you considered conducting longitudinal studies to assess whether interactions with LVSA lead to measurable improvements in teachers\\u2019 skills, particularly in classroom management or personalized instruction?***\", \"**Response**: We sincerely thank the valuable suggestion to conduct long-term follow-up studies in future research to evaluate the impact of LVSA on the skill development of pre-service teachers. **First, we greatly appreciate the reviewer's recognition of the research value of this work. 
This suggestion, along with the Q2 proposal to \\\"transition LVSA to real educational scenarios,\\\" aligns closely with the plans for our future research agenda and represents a key focus for further exploration of LVSA.** The conclusions and contributions of this study have laid a solid theoretical and practical foundation for future long-term follow-up research, which has been added in the discussion section.\", \"Below is our detailed response.\", \"**1. Key Focus Areas for Future Work**\", \"**Designing Diverse Virtual Student Interaction Scenarios**: Develop virtual student groups that encompass a variety of personality traits, disciplinary and cultural backgrounds, and teaching styles, along with visual interface designs. By designing human-computer interaction logic, ensure the usability and accessibility of the simulation platform to enhance pre-service teachers' acceptance of the technology.\", \"**Establishing Personal Digital Records for Skill Development**: Create personal electronic portfolios for pre-service teachers, tracking their growth across multiple dimensions such as classroom management, personalized teaching design, and student interaction skills. This will help quantify the long-term impact of LVSA on their skill development.\", \"**In-Depth Assessment of Personalized Teaching Abilities**: Focus on how teachers optimize and improve their teaching strategies through interactions with diverse virtual students and explore how these skills can be transferred to real classrooms. Emphasize the evaluation of pre-service teachers' teaching practices by integrating quantitative methods (e.g., AI-driven data analysis, teacher and virtual student profiling) and qualitative methods (e.g., teaching reflections and interviews). These multidimensional data analysis approaches will enable a comprehensive assessment of teaching improvements in pre-service teachers.\", \"**2. 
Potential Challenges and Solutions in Long-Term Follow-Up Studies**\", \"**Stability of Participant Engagement**:\", \"**Challenge**: Long-term studies may face the issue of participant attrition among teachers, which could affect the continuity of the research and data integrity.\", \"**Solution**: Establish partnerships with educational institutions to ensure teacher stability and design incentive mechanisms to increase their engagement.\", \"**Coordination of Resources**:\", \"**Challenge**: Long-term experiments require significant resources and meticulous project management.\", \"**Solution**: Implement the research project in phases, starting with small-scale experiments to gain experience and gradually expand the scale of the study to alleviate resource pressures.\", \"Conducting long-term follow-up studies will enable us to comprehensively evaluate the educational impact of LVSA, particularly its effectiveness in enhancing teachers\\u2019 practical teaching skills. Furthermore, such research can provide valuable empirical data for the field of educational technology, contributing to broader educational innovation.\", \"We look forward to implementing this suggestion in the future, using long-term experimental follow-up to deeply explore the potential applications of LVSA in educational practice and to provide scientific evaluation and empirical support for technology adoption in global educational contexts. Once again, we sincerely thank the valuable suggestion, which will greatly enrich the content and depth of our research.\"]}", "{\"title\": \"Response to Reviewer rtkz's concern about application and validation of LVSA in real educational scenarios\", \"comment\": \"***Q2: Do you have plans to test the LVSA in real-world educational settings or with pre-service teachers? 
If so, what are your anticipated challenges in transitioning from simulation to actual classroom interactions?***\\n\\n**Response**: We sincerely thank the reviewer for the valuable suggestion regarding the application and validation of LVSA in real educational scenarios. We agree that extending LVSA research from simulated environments to real classroom settings could further verify its practical utility. However, the core objective of this study is to propose and validate the SOE pipeline to address key challenges in the modeling and evaluation of virtual students. Based on the current research goals and its achieved milestones, we believe this study has fully realized its primary contributions. The application and validation of LVSA in real educational scenarios is an incremental enhancement and will be a key focus of our future work. However, given the current challenges, this is difficult to achieve in the short term. We will follow the reviewer's advice to gradually advance this direction once the current work is well-received, ensuring the continuity and practical relevance of the research. \\n\\nBelow is our detailed response.\\n\\n**1. Focus and Achievements of the Current Study**\\n\\n- **Research Positioning**: \\nThe core objective of this study is to propose and validate the SOE pipeline to address key challenges in the modeling and evaluation of virtual students. In our experimental design, we chose a simulated environment as the focus of the study to ensure the framework's scientific rigor and generalizability. This approach is also a common practice in related fields. 
For instance, prior studies typically validate the behavioral characteristics of virtual students and the effectiveness of evaluation frameworks in simulated environments before gradually extending them to real-world applications [1-3].\\n- **Completeness and Current Objectives**: \\nThis study has comprehensively demonstrated the modeling and evaluation process of virtual students in a simulated environment, achieving its preliminary research objectives. The successful experiments conducted in the simulated environment provide a solid theoretical and methodological foundation for potential future applications in real-world scenarios.\\n\\n**2. Challenges of Testing in Real Educational Scenarios**\\n- **Complexity of Human-Computer Interaction**: \\nDeploying virtual students in real educational scenarios requires careful consideration of interaction design with actual users, namely pre-service teachers. This includes how to effectively integrate feedback from virtual students into the teaching workflow.\\n- **Technical and Experimental Design Preparations**: \\nWhile the core functionalities of LVSA have been validated in a simulated environment, further development and adjustments are needed to adapt it to the dynamic nature of real classroom settings. This includes enhancing its responsiveness to non-verbal interactions.\\n- **Time and Resource Constraints**: \\nField testing demands extensive preparation, including collaboration with educational institutions, adaptive adjustments to experimental design, and data collection and processing during real-world implementation. These are both time- and resource-intensive activities.\\n\\nIn summary, based on the objectives and current achievements of this study, we believe that the core contributions have been fully realized. 
While our short-term focus will remain on optimizing and refining the research within the simulated environment, in the long term, we highly value and plan to conduct field testing in real educational scenarios to comprehensively evaluate the educational value and practicality of LVSA. Once again, we sincerely thank the reviewer for the valuable suggestion, and we look forward to achieving this goal in future work.\\n\\n[1] Yue, M., Mifdal, W., Zhang, Y., Suh, J., & Yao, Z. (2024). MathVC: An LLM-Simulated Multi-Character Virtual Classroom for Mathematics Education. arXiv preprint arXiv:2404.06711.\\n\\n[2] Wang, L., Zhang, J., Yang, H., Chen, Z., Tang, J., Zhang, Z., ... & Wen, J. R. (2023). When large language model based agent meets user behavior analysis: A novel user simulation paradigm. arXiv preprint arXiv:2306.02552.\\n\\n[3] Gao, C., Lan, X., Lu, Z., Mao, J., Piao, J., Wang, H., ... & Li, Y. (2023). S3: Social-network Simulation System with Large Language Model-Empowered Agents. arXiv preprint arXiv:2307.14984.\"}" ] }
BzsjHiBfLk
Flow Distillation Sampling: Regularizing 3D Gaussians with Pre-trained Matching Priors
[ "Lin-Zhuo Chen", "Kangjie Liu", "Youtian Lin", "Zhihao Li", "Siyu Zhu", "Xun Cao", "Yao Yao" ]
3D Gaussian Splatting (3DGS) has achieved excellent rendering quality with fast training and rendering speed. However, its optimization process lacks explicit geometric constraints, leading to suboptimal geometric reconstruction in regions with sparse or no observational input views. In this work, we try to mitigate the issue by incorporating a pre-trained matching prior into the 3DGS optimization process. We introduce Flow Distillation Sampling (FDS), a technique that leverages pre-trained geometric knowledge to bolster the accuracy of the Gaussian radiance field. Our method employs a strategic sampling technique to target unobserved views adjacent to the input views, utilizing the optical flow calculated from the matching model (Prior Flow) to guide the flow analytically calculated from the 3DGS geometry (Radiance Flow). Comprehensive experiments in depth rendering, mesh reconstruction, and novel view synthesis showcase the significant advantages of FDS over state-of-the-art methods. Additionally, our interpretive experiments and analysis aim to shed light on the effects of FDS on geometric accuracy and rendering quality, potentially providing readers with insights into its performance.
[ "3D Vision", "Differentiable Rendering", "3D Gaussian Splatting" ]
Accept (Poster)
https://openreview.net/pdf?id=BzsjHiBfLk
https://openreview.net/forum?id=BzsjHiBfLk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yB8nohFbsK", "u6zCE4oxMV", "qM5sle9zRD", "oXRkDUScHG", "lqDQtQxIzF", "krMhsK3VyV", "knq9dtY0CC", "iqqUZa4U2h", "ifuQxS8V2i", "efnGNmoE69", "eH2hR6TWLF", "dVJN75aV8Q", "cyeWZxVqr3", "cAyYqQNSaP", "V86aJmd3hC", "U75EqkMBBD", "OmGCezzS2L", "MPehZWEMl7", "KLBMDFs10I", "HK5VV9sLWS", "FLFF3Fm59R", "DMzcPDV0Zg", "6sHen643tN", "57sZLTA923", "4gdQjEwqjz", "4Ic43lKGcH", "0acMw1j3m8", "0UylPMDibS" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732416708112, 1732619231035, 1729474971888, 1732799064195, 1732344064139, 1732335015958, 1732718173281, 1732347936951, 1732334903996, 1732335128765, 1732718800049, 1732339007699, 1730688766880, 1730134973922, 1732336439340, 1732792664514, 1732515800757, 1733046516649, 1732718734305, 1730666737354, 1733023159402, 1737523749329, 1732987683294, 1732620431813, 1732339102805, 1732336363021, 1732344121513, 1734688893519 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6189/Reviewer_LTym" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Reviewer_LTym" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Reviewer_GPCw" ], [ "ICLR.cc/2025/Conference/Submission6189/Reviewer_w1zd" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Reviewer_148K" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Reviewer_148K" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6189/Reviewer_w1zd" ], [ "ICLR.cc/2025/Conference/Submission6189/Reviewer_w1zd" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Authors" ], [ "ICLR.cc/2025/Conference/Submission6189/Area_Chair_r8B3" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your detailed rebuttal. I recommend acceptance of the paper.\"}", "{\"title\": \"Supplemental Details for Question 3\", \"comment\": \"**[Q3]: Comparisons to neural field based methods such as Geo-NeUS or NeuralAngelo would significantly strengthen the claim of state-of-the-art performance.**\\n \\n**[A3] (Supplement):** We evaluated the geometric reconstruction performance of the NeuralAngelo method on the Mushroom dataset. We used the original method's data preprocessing script, set the scene type to \\\"indoor,\\\" and processed the ground truth camera poses from Mushroom along with the initial point cloud from other experiments in this paper to obtain the bounding sphere of interest. 
The experimental parameters were configured exactly as in the authors' paper for the TnT dataset, with a batch size of 16 and 500k iterations.\\n\\nSince the NeuralAngelo method was not designed to reconstruct scene geometry from sparse viewpoints, its performance on Mushroom was suboptimal. Specifically, in the \\u201csauna\\u201d scene, using the default preprocessing script caused the NeRF model to diverge. Following the recommendations of the NeuralAngelo authors, we tried adjusting the radius of the bounding sphere of interest. While the model no longer diverged, it tended to overfit to the input viewpoints during training.\\n\\nTo ensure a fair comparison, we exclude the 'sauna' scene, where NeuralAngelo's reconstruction failed, and compare the average results of the remaining four scenes, as shown in the table below:\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Time \\u2193 |\\n|-----------------|----------|----------|----------|----------|-----------|--------|\\n| **NeuralAngelo** | 0.1016 | 0.0946 | 0.0981 | 0.6505 | 0.4480 | >128h |\\n| **2DGS** | 0.1076 | 0.0826 | 0.0951 | 0.7788 | 0.5326 | **0.8h** |\\n| **2DGS+FDS** | **0.0635** | **0.0505** | **0.0570** | **0.8102** | **0.7064** | 1.3h |\"}", "{\"summary\": \"In this paper, Flow Distillation Sampling (FDS) is proposed to improve the geometric accuracy and rendering quality of 3D Gaussian Splatting.\\nFDS first adopts a camera sampling scheme to sample unobserved views near the training views, and then uses the flow predicted by the pre-trained model to guide the flow calculated from the 3DGS geometry.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The ideas are intuitive, the paper is well written and easy to understand.\\n2. FDS leverages the matching prior to mitigate the overfitting problem and enhance the geometry.\", \"weaknesses\": [\"1. 
More ablation studies are needed.\", \"The depth-adaptive radius is calculated by Eq. (8). How to determine the value of the hyperparameter $\\\\sigma$? Is FDS robust to different $\\\\sigma$?\", \"Does FDS require the normal consistency loss $\\\\mathcal{L}_{n}$? In Table 3, how is the performance of \\u201c2D-GS+FDS\\u201d?\", \"How to determine the weight for FDS $\\\\lambda_{fds}$?\", \"How to determine the start iteration (e.g., 15,000) of applying FDS?\", \"2. Lack of visual comparison on the ScanNet dataset.\"], \"questions\": \"See `Weakness`.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**[Q1]: The reliability of the prior flow cannot be assured under certain sparse viewpoint configurations.**\\n\\n**[A1]:** Our FDS introduces two strategies to improve the reliability of the prior flow.\\nThe first is controlling the overlap between the **input view** and its\\n**sampled unobserved view**,\\nwhich keeps the average flow stable.\\nAccording to the derivation in our updated paper, \\nthe hyperparameter $\\\\sigma$ in FDS represents the average radius of the \\n2D flow between the current input view and its sampled view.\\nThis ensures that the optical flow model can generate accurate optical flow under suitable overlap.\\nSecondly, \\nthe random sampling introduced by FDS also provides an advantage. FDS effectively generates a\\nsufficient number of sampled viewpoints during training, which helps to average out errors. We tested our FDS by fixing the sampling viewpoints instead of using random sampling on the Mushroom dataset. 
The results are shown below:\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|-------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 |\\n| 2DGS + FDS (fixed sampling) | 0.0729 | 0.0617 | 0.0673 | 0.8015 | 0.6312 | 0.0724 | 23.97 | 0.8260 | 0.2623 |\\n| 2DGS + FDS | **0.0615** | **0.0534** | **0.0574** | **0.8151** | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** |\\n\\nWe can see that the random sampling further helps 2DGS improve geometric accuracy.\\n\\n**[Q2]: The method's reliance on the performance of a pretrained optical flow model restricts its generalization.**\\n\\n**[A2]:** Although our model is limited by the accuracy of the optical flow model, the upper bound of our FDS will continue to improve with the emergence of more annotated data and larger models. \\nTo validate our idea, we replaced RAFT with the more advanced SEA-RAFT model in our updated paper, and the accuracy of geometric reconstruction has been further enhanced.\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|-------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS + FDS (Raft) | 0.0689 | 0.0646 | 0.0667 | 0.8042 | 0.6582 | 0.0589 | 23.98 | 0.8255 | 0.2621 |\\n| 2DGS + FDS (Sea Raft) | **0.0615** | **0.0534** | **0.0574** | **0.8151** | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** |\\n\\n**[Q3]: It is important to point out that these datasets are primarily limited to indoor scenes.**\\n\\n**[A3]:** To test our FDS on more diverse datasets, we have added results of\\nour FDS on the DTU dataset, shown below. 
Although our FDS is primarily designed to mitigate the issue\\nof insufficient sampling in observed regions, \\nwe still achieve improvements on the DTU dataset with dense observations.\\n\\n| Method | 24 | 37 | 40 | 55 | 63 | 65 | 69 | 83 |\\n|---------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| 2DGS | **0.48** | 0.86 | 0.36 | 0.43 | 0.90 | **0.94** | 0.80 | 1.27 |\\n| 2DGS+FDS | 0.51 | **0.85** | 0.36 | 0.43 | **0.79** | 1.00 | **0.77** | **1.23** |\\n\\n| Method | 97 | 105 | 106 | 110 | 114 | 118 | 122 | mean |\\n|---------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| 2DGS | 1.30 | **0.72** | 0.70 | 1.24 | 0.47 | 0.70 | 0.58 | 0.78 |\\n| 2DGS+FDS | **1.06** | 0.73 | **0.65** | **1.14** | **0.44** | **0.58** | **0.53** | **0.73** |\"}", "{\"title\": \"Rebuttal by Authors (2)\", \"comment\": \"**[Q3]: Does FDS require the normal consistency loss?**\\n\\n**[A3]:** The normal consistency loss is not an essential component of FDS. It is a regularization loss introduced in 2DGS that helps align the splats\\u2019 normals with the gradients of the depth. Our FDS can also help to improve the reconstruction without this loss. 
We add the results of 2DGS (w/o nc loss) and FDS+2DGS (w/o nc loss) on the Mushroom dataset:\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|-------------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 |\\n| 2DGS + FDS | **0.0615** | **0.0534** | **0.0574** | **0.8151** | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** |\\n|-------------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS (w/o nc loss) | 0.1643 | 0.0904 | 0.1273 | 0.6982 | 0.3853 | 0.1165 | 23.80 | 0.8189 | 0.2627 |\\n| 2DGS (w/o nc loss) + FDS | **0.0774** | **0.0473** | **0.0624** | **0.7778** | **0.6527** | **0.0578** | **24.32** | **0.8290** | **0.2541** |\\n\\n**[Q4]: In Table 3, how is the performance of \\u201c2D-GS+FDS\\u201d?**\\n\\n**[A4]:** The results of \\\"2D-GS+FDS\\\" compared with other prior\\ninformation on the Mushroom dataset\\nare shown in Table 3 of the updated paper.\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|-------------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 |\\n| 2DGS+Depth | 0.0862 | 0.0702 | 0.0782 | 0.8153 | 0.5965 | 0.0672 | 23.92 | 0.8227 | 0.2619 |\\n| 2DGS+Normal | 0.0939 | 0.0637 | 0.0788 | 0.8359 | 0.5782 | 0.0768 | 23.78 | 0.8197 | 0.2676 |\\n| 2DGS+FDS | **0.0615** | **0.0534** | **0.0574** | **0.8151** | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** 
|\\n|-------------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS+Depth+FDS | 0.0561 | 0.0519 | 0.0540 | 0.8295 | 0.7282 | 0.0454 | **24.22** | **0.8291** | **0.2570** |\\n| 2DGS+Normal+FDS | **0.0529** | **0.0450** | **0.0490** | **0.8477** | **0.7430** | **0.0443** | 24.10 | 0.8283 | 0.2590 |\\n| 2DGS+Depth+Normal | 0.0695 | 0.0513 | 0.0604 | 0.8540 | 0.6723 | 0.0523 | 24.09 | 0.8264 | 0.2575 |\\n|-------------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS+Depth+Normal+FDS | **0.0506** | **0.0423** | **0.0464** | **0.8598** | **0.7613** | **0.0403** | **24.22** | **0.8300** | **0.0403** |\\n\\nOur \\u201c2DGS + FDS\\u201d achieves better performance than using any other single prior individually. In combination with other priors, our FDS makes the most significant contribution to performance improvement and can be effectively combined with other prior information to achieve enhanced results.\\n\\n**[Q5]: How to determine the weight for FDS?**\\n\\n**[A5]:** Similar to other methods supervised by prior information, \\nthis weight is determined based on the ratio of values between \\ndifferent loss functions. \\nWe set the weight of the FDS loss to be approximately double that of the L1 image loss function.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you for your response. Applying our method to NeRF might lead to improvements.\\nHowever, our approach requires rendering entire images of neighboring views, whereas NeRF optimizes on a per-ray basis. Rendering full images in NeRF would drastically increase memory consumption and time during training, which makes debugging more challenging and difficult to complete within the limited time. \\n\\n3DGS is better suited to our method given its higher rendering quality and speed. 
We have followed the original suggestion from 148K and compared our method with NeRF-based methods. We plan to explore applying FDS to NeRF soon. Thank you for your suggestion!\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We sincerely thank all the reviewers for their constructive and insightful advice.\\nWe are encouraged that all reviewers consider the proposed FDS intuitive and interesting.\\n- Reviewer w1zd: The idea is simple but makes intuitive sense. This paper offers a promising direction of using pair-wise information prior to advance sparse view reconstruction.\\n- Reviewer LTym: The ideas are intuitive, the paper is well written and easy to understand.\\n- Reviewer 148K: The paper describes an interesting idea on incorporating a flow prior for 3D reconstruction with Gaussian Splatting.\\n- Reviewer GPCw: This approach effectively enhances both the reconstruction quality and rendering quality of existing 3DGS-based methods.\\n\\nMeanwhile, we acknowledge the reviewers' primary concern that\\nthis paper lacks evaluation on some widely used geometry reconstruction datasets, such\\nas DTU.\\nSince DTU is an object-level dataset with dense observations and our FDS is\\nprimarily designed to mitigate the issue of insufficient sampling in observed regions, \\nwe did not prioritize testing on the DTU dataset initially. 
\\nWe have now included results of our FDS on the DTU dataset in our supplementary material, as shown below.\\nWhile FDS performs better under sparse observations, it still achieves notable improvements on the DTU dataset with dense observations.\\n\\n| Method | 24 | 37 | 40 | 55 | 63 | 65 | 69 | 83 |\\n|---------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| 2DGS | **0.48** | 0.86 | 0.36 | 0.43 | 0.90 | **0.94** | 0.80 | 1.27 |\\n| 2DGS+FDS | 0.51 | **0.85** | 0.36 | 0.43 | **0.79** | 1.00 | **0.77** | **1.23** |\\n\\n| Method | 97 | 105 | 106 | 110 | 114 | 118 | 122 | mean |\\n|---------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| 2DGS | 1.30 | **0.72** | 0.70 | 1.24 | 0.47 | 0.70 | 0.58 | 0.78 |\\n| 2DGS+FDS | **1.06** | 0.73 | **0.65** | **1.14** | **0.44** | **0.58** | **0.53** | **0.73** |\\n\\nWe have updated the submitted paper based on each reviewer's suggestions. The main updates include:\\n- We replaced the original RAFT model with a more advanced optical flow model, SEA-RAFT, enabling FDS to achieve a significant improvement in its performance. The experimental results presented in the submitted paper have all been updated accordingly.\\n- We have updated Table 3 to include more comprehensive ablation experiments on prior supervision.\\n- We have added detailed derivations to the paper to clarify the physical meaning of the hyperparameter $\\\\sigma$, providing valuable insights for its configuration.\\n\\nWe have carefully addressed the additional concerns \\nraised by each reviewer in the corresponding official comments. \\nFinally, we extend our heartfelt gratitude to all reviewers \\nand remain open to further \\nsuggestions to improve all aspects of our work.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q1]: How to determine the value of the hyperparameter $\\\\sigma$?**\\n\\n**[A1]:** Thank you for your question. 
We have derived the physical meaning of this hyperparameter $\\\\sigma$,\\nwhich represents the average radius of the 2D flow between the current input view and the unobserved sampling view. \\nTherefore, this parameter directly corresponds to a measurable property and can be determined by the flow length that the optical flow model predicts the most accurately, which is typically related to the training data of the optical flow model.\\nThis parameter can be shared across different types of datasets, \\nincluding the DTU dataset (an object-centered dataset with dense sampling) \\nand the Mushroom dataset (an indoor dataset with sparse sampling).\\nWe have updated the results of FDS in our paper.\\n\\n**Derivation:**\\n\\nAs noted in [1], the rotation flow in \\nimage warping is independent of depth. Therefore, we can set the \\nrotation part to the identity matrix, so that \\nthe transformation between the input view $i$ and \\nits sampled view $s$ is a pure translation. We can then rewrite \\nEq. (4) in our paper as:\\n\\n\\n$$\\nD^s(u_2 , v_2) \\\\begin{bmatrix}\\n u_2 \\\\\\\\\\n v_2\\\\\\\\\\n 1\\n \\\\end{bmatrix}\\n=D^i(u_1 , v_1)\\n \\\\begin{bmatrix}\\n u_1\\\\\\\\\\n v_1\\\\\\\\\\n 1\\n \\\\end{bmatrix} + K\\n \\\\begin{bmatrix}\\n t_1\\\\\\\\\\n t_2\\\\\\\\\\n t_3\\n \\\\end{bmatrix}\\n$$\\n\\nwhere $K$ is the intrinsic matrix of the camera. After solving the above equation, we get:\\n\\n$$\\n\\\\begin{bmatrix}\\n u_2\\\\\\\\\\n v_2\\\\\\\\\\n \\\\end{bmatrix}\\n= \\\\begin{bmatrix}\\n\\\\frac{D_i(u_1, v_1)u_1+f_xt_1+c_xt_3}{D_i(u_1, v_1) + t_3} \\\\\\\\\\n\\\\frac{D_i(u_1, v_1)v_1+f_yt_2+c_yt_3}{D_i(u_1, v_1) + t_3} \\n\\\\end{bmatrix}\\n$$\\n\\n\\nWe set $ t_3 = 0 $ in our camera sampling scheme and assume camera intrinsic parameters: $f_x\\\\approx f_y=f$. 
\\nThe radiance flow $F^{i\\\\rightarrow s}(u_1, v_1) = \\\\begin{bmatrix}\\n u_2 - u_1\\\\\\\\\\n v_2 - v_1\\\\\\\\\\n \\\\end{bmatrix}$ from the training view $i$ to its sampled view $s$ is shown below:\\n\\n$$\\nF^{i\\\\rightarrow s}(u_1, v_1) = \\n \\\\begin{bmatrix}\\n \\\\frac{f}{D_i(u_1, v_1)}t_1\\\\\\\\\\n \\\\frac{f}{D_i(u_1, v_1)}t_2\\n \\\\end{bmatrix}\\n$$\\n\\nWe aim to keep the value of\\n$||F^{i\\\\rightarrow s}(u_1, v_1)||_2$ constant for the pixel $x = (u_1, v_1)$\\nduring each camera sampling. By setting $||F^{i\\\\rightarrow s}(u_1, v_1)||_2 = \\\\sigma $, we get:\\n\\n$$\\n \\\\epsilon_t = \\\\sqrt{t_1^2 + t_2^2} = \\\\sigma \\\\frac{D_i(u_1, v_1)}{f}\\n$$\\n\\n\\n\\nThus, the radius of translation in our camera sampling \\nis defined as \\n$\\\\epsilon_t = \\\\sigma \\\\frac{D_i(u_1, v_1)}{f} $, \\nwhich helps maintain stable flow.\\nThe parameter $\\\\sigma$ can be tuned as a hyperparameter. \\nGiven that pixel depths vary within an image, \\nwe use the mean depth $ \\\\bar{D_i} $ of the image \\nand set the radius of our translation to\\n$\\\\epsilon_t = \\\\sigma \\\\frac{\\\\bar{D_i}}{f} $.\\nThis keeps $||F^{i\\\\rightarrow s}(u_1, v_1)||_2 = \\\\sigma $, demonstrating that $\\\\sigma$ \\nrepresents the average radius of the\\n2D flow between the current input view and its unobserved sampling view.\\n\\n**[Q2]: Is FDS robust to different radii?**\\n\\n**[A2]:** As noted in A1, the radius parameter represents \\nthe average radius of the 2D flow between the current \\ninput view and its unobserved sampling view. \\nSo our FDS is robust to different radii within a proper range \\ncovered by the training data of the optical flow model. 
\\nWe added an experiment to validate our statements on the Mushroom dataset. The results are shown below:\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|-------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 |\\n| 2DGS + FDS (\\u03c3=11.5) | 0.0765 | 0.0593 | 0.0679 | 0.8107 | 0.6469 | 0.0574 | 23.94 | 0.8244 | 0.2633 |\\n| 2DGS + FDS (\\u03c3=23) | 0.0615 | **0.0534** | 0.0574 | **0.8151** | 0.6974 | **0.0561** | 24.06 | 0.8271 | 0.2610 |\\n| 2DGS + FDS (\\u03c3=30) | **0.0594** | 0.0539 | **0.0566** | 0.8089 | **0.7023** | 0.0571 | **24.09** | **0.8276** | **0.2609** |\\n\\nIt can be observed that both increasing and decreasing the radius allow FDS to achieve a consistent level of improvement.\\n\\n[1]. Bian, Jia-Wang, et al. 
\\\"Auto-rectify network for unsupervised indoor depth estimation.\\\" IEEE transactions on pattern analysis and machine intelligence 44.12 (2021): 9802-9813.\"}", "{\"title\": \"Rebuttal by Authors (3)\", \"comment\": \"**[Q6]: How to determine the start iteration (e.g., 15,000) of applying FDS ?**\\n\\n**[A6]:** We set the start iteration of FDS loss to \\n15000 to maintain the same number of points as the baseline for fair comparison.\\n(The densification of points is stopped after 15000 iterations.)\", \"we_tested_different_starting_iterations_on_the_mushroom_dataset_and_the_results_are_presented_below\": \"| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|-------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 |\\n|-------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS + FDS (15000) | 0.0615 | **0.0534** | **0.0574** | **0.8151** | **0.6974** | 0.0561 | 24.06 | 0.8271 | 0.2610 |\\n| 2DGS + FDS (10000) | **0.0590** | 0.0602 | 0.0596 | 0.8053 | 0.6932 | **0.0545** | **24.21** | **0.8300** | **0.2579** |\\n| 2DGS + FDS (7000) | 0.0640 | 0.0538 | 0.0589 | 0.8059 | 0.6873 | 0.0584 | 24.13 | 0.8278 | 0.2583 |\\n\\n\\nIn our experience, incorporating the FDS loss when PSNR\\nstabilizes during training yields the best results.\\nWe set the starting iteration of the FDS loss to 15000 to not only maintain the same number of points as the baseline but also stabilize the training PSNR.\\n\\n**[Q7]: Lack of visual comparison on the ScanNet dataset.**\\n\\n**[A7]:** Thank you for your suggestion. 
We have added a visual comparison of ScanNet in Figure 3 of the updated paper.\"}", "{\"comment\": \"We hope our response has addressed your questions. As the discussion phase is coming to a close, we are looking forward to your feedback and would like to know if you have any remaining concerns we can address.\\nWe would be grateful if you find our revisions satisfactory and consider raising your score for our paper.\\n\\nThank you once again for the time and effort you have dedicated to reviewing our paper.\\n\\nBest regards\\n\\nFlow Distillation Sampling Authors\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q1]: Baseline methods usually use more diverse datasets.**\\n\\n**[A1]:** Thank you for your advice. \\nWe did not prioritize testing on the DTU dataset initially since \\nDTU is an object-level dataset with dense observations, and our FDS\\nis primarily designed to mitigate the issue of insufficient \\nsampling in observed regions.\\nTo test our FDS on more diverse datasets,\\nwe have now included results of our FDS on the DTU dataset, as shown below.\\nWhile FDS performs better under sparse observations, it still achieves notable improvements on the DTU dataset with dense observations.\\n\\n| Method | 24 | 37 | 40 | 55 | 63 | 65 | 69 | 83 |\\n|---------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| 2DGS | **0.48** | 0.86 | 0.36 | 0.43 | 0.90 | **0.94** | 0.80 | 1.27 |\\n| 2DGS+FDS | 0.51 | **0.85** | 0.36 | 0.43 | **0.79** | 1.00 | **0.77** | **1.23** |\\n\\n| Method | 97 | 105 | 106 | 110 | 114 | 118 | 122 | mean |\\n|---------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| 2DGS | 1.30 | **0.72** | 0.70 | 1.24 | 0.47 | 0.70 | 0.58 | 0.78 |\\n| 2DGS+FDS | **1.06** | 0.73 | **0.65** | **1.14** | **0.44** | **0.58** | **0.53** | **0.73** |\\n\\n**[Q2]: Do not provide evidence or explanations for depth distortion loss \\nand it is unclear how this influenced the quantitative comparison to 
2DGS.**\\n\\n**[A2]:** We found that the depth distortion loss tends to \\nnegatively impact performance on indoor scenes.\\nTo validate this observation, we evaluated the performance \\nof \\u201c2DGS (with depth distortion loss)\\u201d and \\u201c2DGS (with depth distortion loss) +\\nFDS\\u201d. The results on the Mushroom dataset are shown below.\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 |\\n| 2DGS (distortion loss) | 0.1225 | 0.1227 | 0.1226 | 0.7621 | 0.4707 | 0.1634 | 22.40 | 0.7916 | 0.3031 |\\n| 2DGS (distortion loss) + FDS | 0.0768 | 0.0705 | 0.0736 | 0.7999 | 0.6511 | 0.0975 | 23.27 | 0.8106 | 0.2815 |\\n| 2DGS + FDS | **0.0615** | **0.0534** | **0.0574** | **0.8151** | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** |\\n\\nFor the DTU results in A1, the depth distortion loss is used by default in both 2DGS and 2DGS+FDS. Our FDS also improves reconstruction results.\\n\\n\\n**[Q3]: Comparisons to neural field based methods such as Geo-NeUS or NeuralAngelo would significantly strengthen the claim of state-of-the-art performance.**\\n\\n**[A3]:** Thank you for your advice. We are testing NeRF-based methods, such as NeuralAngelo, on the Mushroom dataset. \\nWe are running the relevant program and will update the results in about two days.\\n\\n**[Q4]: The overall quality and the presentation should be improved.**\\n\\n**[A4]:** Thank you for your advice. We have carefully revised our updated paper to improve the presentation quality and address identified errors in the method and conclusion sections.\"}", "{\"summary\": \"
By incorporating pre-trained matching priors into the optimization process of 3DGS, this method significantly improves both the geometric accuracy and rendering quality of 3DGS.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The method proposed in this paper integrates matching priors derived from a pretrained optical flow model to guide the optimization of 3DGS. This approach effectively enhances both the reconstruction quality and rendering quality of existing 3DGS-based methods.\\n\\nThe methodology is well-structured, providing clear and detailed explanations of the proposed FDS technique, the adaptive camera sampling scheme, and the associated loss functions.\\n\\nComprehensive experimental evaluations are conducted across multiple datasets, demonstrating the method's effectiveness and robustness.\\n\\nAdditionally, the paper includes interpretive experiments to illustrate the mutual refinement process of the flows, thereby enhancing the understanding of the method's capabilities.\", \"weaknesses\": \"The results of the proposed method are constrained by the initial quality of the prior flow, however, the reliability of the prior flow cannot be assured under certain sparse viewpoint configurations.\\n\\nAs noted in the limitations section, the method's reliance on the performance of a pretrained optical flow model restricts its generalizability.\\n\\nWhile the authors have conducted experiments across multiple datasets, it is important to point out that these datasets are primarily limited to indoor scenes. It is recommended that the authors evaluate their method on a more diverse range of datasets to assess its applicability in various scenarios.\\n\\nThe paper lacks a discussion on the computational complexity of the method. 
It is recommended that the authors include a detailed report on the training and inference times of the model in the experimental section, along with comparative metrics against other existing methods.\", \"questions\": \"In the related work section, the review of existing prior art aimed at improving 3DGS performance should be more comprehensive and clearer, particularly concerning the relevant work on optical flow priors. Additionally, in the subsection on Prior Regulation for Rendering, there are sentences with grammatical errors that require careful review and correction.\\n\\nIn Algorithm 1, there are notation errors that need careful checking and correction.\\n\\nThe comparison of depth reconstruction experiments needs to be supplemented with results from other methods to validate the superiority of the proposed approach in geometric reconstruction.\\n\\nIn the dataset section, the paper mentions that the authors have evaluated their method on the Replica dataset, but the experimental results are not presented.\\n\\nThe authors provide limited comparisons with existing methods; it is recommended that they include more baseline methods for a more robust evaluation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to improve the 3D Gaussian Splatting reconstruction quality in regions with sparse or no observational input views, by integrating a pre-trained matching prior into the 3DGS optimization process. The matching prior is incorporated through optical flow from a pre-trained model, which supervises the Radiance Flow calculated using 3DGS-rendered depth. Additionally, the authors introduce a Flow Distillation Sampling scheme to efficiently sample unobserved camera views around the input views. The proposed Flow Distillation Loss effectively avoids the scale ambiguity existing in monocular priors. 
The authors present clear ablation studies and quantitative improvements to support their claims.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The idea is simple but makes intuitive sense. Matching priors can provide absolute scale information in contrast to monocular priors. This paper offers a promising direction of using pair-wise prior information to advance sparse view reconstruction.\", \"The ablation study clearly demonstrates the improvement gained from using this pair-wise matching prior without scale ambiguity. The quantitative results shown in Table 3 validate the advantage compared to monocular depth prior, even to multi-view depth prior. A clear visualization of the mutual refinement of two flows is also provided.\"], \"weaknesses\": [\"This paper lacks evaluation on widely used geometry reconstruction and novel view synthesis benchmarks such as DTU, Tanks and Temples, and MipNeRF 360. The advantages of the proposed method would be more convincing if the authors could present results on one or more of these benchmarks.\", \"As mentioned in line 252, both the Prior Flow and Radiance Flow suffer from inaccuracies, raising concerns about the stability of the benefits provided by the proposed Flow Distillation Loss. It is possible that this loss could introduce artifacts or incorrect guidance due to bias. While the metrics in Table 3 appear strong, it\\u2019s unclear why the loss significantly outperforms multi-view depth supervision, which does not suffer from inaccurate prior flow. More explanation and analysis are needed to clarify this point.\"], \"questions\": [\"Given that RAFT is computed at every time step, how does the training time for 2DGS + FDS compare to 2DGS?\", \"Why do both monodepth and multi-view depth seem to only worsen the results, as shown in Table 3?\", \"It's said in line 316 that the normal prior is introduced for evaluation on the ScanNet dataset. But the metrics in Table 2 
and Table 3 seem inconsistent regarding the results of 2DGS + FDS. Does this mean the ScanNet result in Table 2 doesn't use the normal prior?\", \"Can 2DGS + FDS outperform 2DGS + Normal in Table 3?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors (2)\", \"comment\": \"**[Q4]: Why do both mono-depth and multi-view depth seem to only worsen the results, as shown in Table 3?**\\n\\n**[A4]:** We initially tested the scale and shift-invariant loss used in\\nRobust Nerf [1] for monocular depth supervision.\\nThis method has limited ability to predict absolute depth, which in turn only worsens the result of geometry reconstruction.\\nWe have updated our results using a more advanced strategy for monocular depth supervision, as employed in Mono-SDF [2]:\\n\\n$$\\n L_{depth}=\\\\sum_{r \\\\in R} ||(w^iD^i + q^i) - \\\\hat{D}^{i}||^2\\n$$\\n\\nwhere $w^i$ and $q^i$ are the scale and shift used to align mono-depth with the absolute depth of input view $i$, calculated using the least squares method.\\n$\\\\hat{D}^{i}$ is the monocular depth prediction.\\nThe updated results on the Mushroom dataset in Table 3 are presented below.\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|-------------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 |\\n| 2DGS+Depth | 0.0862 | 0.0702 | 0.0782 | 0.8153 | 0.5965 | 0.0672 | 23.92 | 0.8227 | 0.2619 |\\n| 2DGS+Normal | 0.0939 | 0.0637 | 0.0788 | 0.8359 | 0.5782 | 0.0768 | 23.78 | 0.8197 | 0.2676 |\\n| 2DGS+FDS | **0.0615** | **0.0534** | **0.0574** | **0.8151** | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** 
|\\n|-------------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS+Depth+FDS | 0.0561 | 0.0519 | 0.0540 | 0.8295 | 0.7282 | 0.0454 | **24.22** | **0.8291** | **0.2570** |\\n| 2DGS+Normal+FDS | **0.0529** | **0.0450** | **0.0490** | **0.8477** | **0.7430** | **0.0443** | 24.10 | 0.8283 | 0.2590 |\\n| 2DGS+Depth+Normal | 0.0695 | 0.0513 | 0.0604 | 0.8540 | 0.6723 | 0.0523 | 24.09 | 0.8264 | 0.2575 |\\n|-------------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS+Depth+Normal+FDS | **0.0506** | **0.0423** | **0.0464** | **0.8598** | **0.7613** | **0.0403** | **24.22** | **0.8300** | **0.0403** | \\n\\nFrom the results, it can be seen that depth order\\ninformation provided by monocular depth \\nimproves reconstruction accuracy. Meanwhile,\\nour FDS achieves the best performance, and by integrating \\nall three components, we obtained the optimal results.\\nFor multi-view depth, we use L1 loss to supervise rendered depth.\\nThe results of multi-view depth remain unreliable due to the \\nlimited overlap between input views, which is not accounted \\nfor in their training datasets. We tested the average \\n\\\"Abs Rel\\\" of the multi-view depth prior. The result is 0.19, which is \\nworse than the depth \\nrendered by the original 2DGS, whose \\\"Abs Rel\\\" is 0.10.\\n\\n**[Q5]: Inconsistency regarding the results of 2DGS + FDS between Table 2 and Table 3.**\\n\\n**[A5]:** Thank you for your question.\\nIn Table 3, we use the Mushroom dataset for the ablation study, \\nwhile Table 2 presents the results of our model on the ScanNet dataset, which\\nis a different dataset.\\nTherefore, the data in Table 2 and Table 3 are not directly comparable.\\n\\n**[Q6]: Can 2DGS + FDS outperform 2DGS + Normal in Table 3?**\\n\\n**[A6]:** Thank you for your question. 
Our 2DGS + FDS can achieve better performance than\\n2DGS + Normal. The results on the Mushroom dataset are shown below. \\n We have supplemented more comprehensive results in Table 3. It is worth noting \\nthat prior normals can provide more accurate geometric orientations as indicated by 'NC', \\nwhile our FDS enables more precise geometric positioning as indicated by other metrics. \\nThese two improvements are largely complementary to each other.\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS+FDS | **0.0615** | **0.0534** | **0.0574** | 0.8151 | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** |\\n| 2DGS+Normal | 0.0939 | 0.0637 | 0.0788 | **0.8359** | 0.5782 | 0.0768 | 23.78 | 0.8197 | 0.2676 |\\n\\n[1] Liu, Yu-Lun, et al. \\\"Robust dynamic radiance fields.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[2] Yu, Zehao, et al. \\\"MonoSDF: Exploring monocular geometric cues for neural implicit surface reconstruction.\\\" Advances in Neural Information Processing Systems 35 (2022): 25018-25032.\"}", "{\"comment\": \"Thank you for the efforts in the rebuttal. You addressed most of my concerns with the DTU experiment, the comparison to NeuralAngelo, and the ablation on the optical flow gradients. For the comparison on DTU you can further compare to NeRF-based methods. All in all, I raise my score to a borderline accept (6).\"}", "{\"comment\": \"Thank you for your comments and support.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We would like to sincerely thank you for your valuable feedback and the time you've dedicated to reviewing our paper. 
As the extended discussion phase is now nearing its end, we would greatly appreciate your feedback on our revisions. Please let us know if there are any remaining issues we can address.\\n\\nThank you once again for your support and effort!\\n\\nBest regards\\n\\nFlow Distillation Sampling Authors\"}", "{\"comment\": \"We hope our response has addressed your questions. As the discussion phase is coming to a close, we are looking forward to your feedback and would like to know if you have any remaining concerns we can address.\\nWe are grateful if you find our revisions satisfactory and consider raising your score for our paper.\\n\\nThank you once again for the time and effort you have dedicated to reviewing our paper.\\n\\nBest regards\\n\\nFlow Distillation Sampling Authors\"}", "{\"summary\": \"The paper proposes an optical flow-based regularization for 3D and 2D Gaussian Splatting. It compares the optical flow output between an input view and a sampled unobserved view to a so-called radiance flow determined from the camera motion and the reconstructed scene. The authors claim that a loss on the difference between the radiance flow and the optical flow results in improved geometry and view synthesis quality. Experiments show improved performance of 3DGS and 2DGS with flow distillation on the ScanNet, MuSHRoom and Replica datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper describes an interesting idea on incorporating a flow prior for 3D reconstruction with Gaussian Splatting.\", \"The related work section contains all relevant geometry reconstruction-based 3DGS works and sets them in context to the proposed method.\", \"The approach is simple and easy to understand, as Figures 1 and 2 are well done and intuitive.\"], \"weaknesses\": [\"The experimental evaluation only considered indoor room datasets, MuSHRoom, ScanNet and Replica. Baseline methods usually use more diverse datasets such as DTU [Jensen et al. 
2014], Tanks and Temples [Knapitsch et al. 2017] and Mip-NeRF360 [Barron et al. 2022].", "The authors found that the depth distortion loss in 2DGS degrades the results. However, they do not provide evidence or explanations for this, and it is unclear how this influenced the quantitative comparison to 2DGS.", "The paper solely focuses on 3DGS-based methods in the related work and also in the experimental evaluation. Comparisons to neural field based methods such as Geo-NeUS or NeuralAngelo would significantly strengthen the claim of state-of-the-art performance.", "The overall quality and the presentation should be improved, e.g. the conclusion is unspecific and contains general claims, inconsistent capitalization of 'Gaussian', no explanation of \\\\hat{\\\\alpha} in equation 2.\"], \"questions\": [\"The Radiance Flow maps pixels from the source view to the target view. Considering pixels in the target view containing splatted Gaussians that are occluded in the source view, how is the Radiance Flow computed for these regions?\", \"In line 267, how does detaching the optical flow influence the overall performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"no concerns\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are very delighted to receive your response and suggestion! Thank you for raising the rating and for your support.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thank you for your response. The proposed flow supervision method demonstrates potential for various reconstruction tasks with limited observations, as evidenced by the provided experiments. I will recommend accepting this paper.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thank you for the detailed response and the additional experiments. 
All my previous concerns have been addressed. However, I agree with Reviewer 148K that the proposed radiance flow supervision appears not to be limited to 3DGS. It would be valuable to explore whether the proposed methods can be applied to improve NeRF baselines, e.g. Mip-NeRF 360, by addressing issues like floaters. Incorporating these discussions would strengthen the technical contributions of the paper. I will consider raising my score if the proposed method can also enhance NeRF baselines.\"}", "{\"title\": \"Rebuttal by Authors (2)\", \"comment\": \"**[Q5]: Considering pixels in the target view containing splatted Gaussians that are occluded in the source view,\\nhow is the Radiance Flow computed for these regions?**\\n\\n**[A5]:** From the source view to the target view, we compute radiance flow using the following equation, based on depth warping:\\n\\n$$\\n D^n(u_2 , v_2)\\n \\\\begin{bmatrix}\\n u_2\\\\\\\\\\n v_2\\\\\\\\\\n 1\\n \\\\end{bmatrix} = KT_{m}^{n}K^{-1}D^m(u_1 , v_1) \\n \\\\begin{bmatrix}\\n u_1\\\\\\\\\\n v_1\\\\\\\\\\n 1\\n \\\\end{bmatrix}\\n$$\\n\\nSpecifically, we back-project the depth map rendered from the input view into 3D space and calculate the 2D positions of these 3D points in the sampled view using the relative camera poses. The radiance flow is then obtained by subtracting the original pixel positions from these 2D positions. This method allows us to compute the radiance flow for all pixels, including those in self-occluded regions.\\nThe prior flow, computed via pixel matching, tends to be inaccurate in occluded regions. 
However, thanks to the random sampling introduced by FDS, for any single training viewpoint, we effectively generate a sufficient number of sampled viewpoints during training.\\nThis acts as a form of model ensemble, helping to average out errors in occluded regions, as pixels that are self-occluded in one sampled viewpoint can still be observed from others, thereby mitigating these errors.\\n\\n\\n**[Q6]: In line 267, how does detaching the optical flow influence the overall performance?**\\n\\n**[A6]:** Detaching the optical flow can help to reduce training time and improve performance. We have tested the results of FDS on the Mushroom dataset without detaching the optical flow. We found that the rendered results tend to become corrupted, and the training on the 'honka' scene runs out of memory on a 4090D GPU.\", \"we_report_the_compared_results_of_the_last_four_scenes\": \"| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 |\\n| 2DGS+FDS (without detach) | 0.1622 | 0.1999 | 0.1811 | 0.6979 | 0.2376 | 0.1680 | 17.23 | 0.6459 | 0.5593 |\\n| 2DGS+FDS | **0.0567** | **0.0485** | **0.0526** | **0.8202** | **0.7188** | **0.0509** | **24.47** | **0.8377** | **0.2547** |\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q1]:This paper lacks evaluation on widely used geometry reconstruction and novel view synthesis benchmarks such as DTU, Tanks and Temples, and MipNeRF 360.**\\n\\n**[A1]:** Thank you for your advice. \\nDTU is an object-level, densely observed dataset, while our FDS is primarily designed to mitigate the issue of insufficient sampling in observation regions. 
\\nTherefore, we did not prioritize testing on the DTU dataset initially.\\nWe have added the results of our FDS on the DTU dataset, which are shown below.\\nWhile FDS performs better under sparse observations, it still achieves notable improvements on the DTU dataset with dense observations.\\n\\n| Method | 24 | 37 | 40 | 55 | 63 | 65 | 69 | 83 |\\n|---------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| 2DGS | **0.48** | 0.86 | 0.36 | 0.43 | 0.90 | **0.94** | 0.80 | 1.27 |\\n| 2DGS+FDS | 0.51 | **0.85** | 0.36 | 0.43 | **0.79** | 1.00 | **0.77** | **1.23** |\\n\\n| Method | 97 | 105 | 106 | 110 | 114 | 118 | 122 | mean |\\n|---------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| 2DGS | 1.30 | **0.72** | 0.70 | 1.24 | 0.47 | 0.70 | 0.58 | 0.78 |\\n| 2DGS+FDS | **1.06** | 0.73 | **0.65** | **1.14** | **0.44** | **0.58** | **0.53** | **0.73** |\\n\\n**[Q2]:While the metrics in Table 3 appear strong, \\nit\\u2019s unclear why the loss significantly outperforms multi-view depth supervision, \\nwhich does not suffer from inaccurate prior flow. \\nMore explanation and analysis are needed to clarify this point.**\\n\\n**[A2]:** Thank you for your question. \\nFirst, the Mushroom dataset has limited overlap between input views, making multi-view depth estimation unreliable.\\nIn contrast, our prior flow is extracted from a controlled overlap between the input view and a sampled unobserved view, which is more reliable than offline multi-view depth from uncertain input views.\\n\\nSecond, to reduce inaccuracies caused by relatively blurry sampled views, FDS adopts a random sampling strategy.\\nFor any single input viewpoint, FDS effectively \\ngenerates a sufficient number of sampled viewpoints \\nduring training. 
This acts as a form of model ensemble, \\nhelping to average out errors.\\nTo validate this claim, we conducted experiments using fixed sampling viewpoints on the Mushroom dataset instead of random sampling. The results showed \\na significant decline in performance, highlighting the \\nimportance of this approach.\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|-------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 |\\n| 2DGS + FDS (fixed sampling) | 0.0729 | 0.0617 | 0.0673 | 0.8015 | 0.6312 | 0.0724 | 23.97 | 0.8260 | 0.2623 |\\n| 2DGS + FDS | **0.0615** | **0.0534** | **0.0574** | **0.8151** | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** |\\n\\n\\n**[Q3]:Given that RAFT is computed at every time step, \\nhow does the training time for 2DGS + FDS compare to 2DGS?**\\n\\n**[A3]:** We have included our training time analysis in Table 1 of the revised paper.\\nOn the Mushroom dataset, adding the FDS loss led to a half-hour increase in training time. 
Despite this increase, the overall training time remained comparable to other baselines.\\n\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 | Time |\\n|-------------------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|-------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 | 0.8h |\\n| 2DGS + FDS | **0.0615** | **0.0534** | **0.0574** | **0.8151** | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** | 1.3h |\"}", "{\"title\": \"Official Comment by Authors (2)\", \"comment\": \"**[Q4]: It is recommended that the authors include a detailed\\nreport on the training and inference times of the model in the experimental section, \\nalong with comparative metrics against other existing methods.**\\n\\n**[A4]:** Thank you for your advice. We have included our training time analysis in Table 1 of the revised paper.\\nOn the Mushroom dataset, adding the FDS loss led to a half-hour increase in training time. 
Despite this increase, the overall training time remained comparable to other baselines.\\nSince FDS is only applied during the training process, it does not affect the inference time.\\n| Method | Acc \\u2193 | Comp \\u2193 | C-L1 \\u2193 | NC \\u2191 | F-Score \\u2191 | Abs Rel \\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 | Time |\\n|------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|-------|\\n| GOF | 0.1812 | 0.1093 | 0.1453 | 0.6292 | 0.3665 | 0.2380 | 21.37 | 0.7762 | 0.3132 | 1.4h |\\n| PGSR | 0.0971 | 0.1420 | 0.1196 | 0.7193 | 0.5105 | 0.1723 | 22.13 | 0.7773 | 0.2918 | 1.2h |\\n|------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|-------|\\n| 3DGS | 0.1167 | 0.1033 | 0.1100 | 0.7954 | 0.3739 | 0.1214 | 24.18 | 0.8392 | 0.2511 | 0.8h |\\n| 3DGS + FDS | **0.0527** | **0.0565** | **0.0546** | **0.8178** | **0.6958** | **0.0568** | **24.76** | **0.8486** | **0.2381** | 1.3h |\\n|------------------|----------|----------|----------|----------|-----------|-----------|---------|---------|----------|-------|\\n| 2DGS | 0.1078 | 0.0850 | 0.0964 | 0.7835 | 0.5170 | 0.1002 | 23.56 | 0.8166 | 0.2730 | 0.8h |\\n| 2DGS + FDS | **0.0615** | **0.0534** | **0.0574** | **0.8151** | **0.6974** | **0.0561** | **24.06** | **0.8271** | **0.2610** | 1.3h |\\n\\n**[Q5]: The review of existing prior art aimed at improving 3DGS performance should be more comprehensive and clearer.\\nThere are sentences with grammatical errors that require careful review and correction.\\nIn Algorithm 1, there are notation errors that need careful checking and correction.**\\n\\n**[A5]:** Thank you for your advice. 
We have carefully reviewed and revised our paper in the related work and method sections.\\nIn our updated paper, we used colors to highlight the changes.\\nSpecifically, we adopted a more concise and clear discussion approach in the related work section and added a discussion on the application of priors in optical flow models.\\nIn the method section (including Algorithm 1), we used more rigorous notation and corrected some errors.\\n\\n**[Q6]: The comparison of depth reconstruction experiments needs to be supplemented with results from other methods,\\n it is recommended that they include more baseline methods for a more robust evaluation.**\\n\\n**[A6]:** Thank you for your advice. \\nWe have compared the depth reconstruction results in Table 1 and Table 2 of our paper, where a lower \\\"Abs Rel\\\" metric indicates better reconstruction quality. Compared with GOF, PGSR, and 2DGS, which \\nare SOTA 3DGS-based 3D reconstruction methods, our FDS achieves\\nthe best depth reconstruction performance.\\nWe chose 2DGS as the baseline method for our FDS because it performed the best on the Mushroom dataset compared with\\nGOF and PGSR.\\n\\n**[Q7]: In the dataset section, the paper mentions that the authors have evaluated their method on the Replica dataset, but the experimental results are not presented.**\\n\\n**[A7]:** The experimental results of the Replica dataset are shown in our supplementary materials due to page limitations.\\nWe also updated our Replica results using a more advanced optical flow model.\"}", "{\"metareview\": \"The paper presents a method that integrates pretrained matching priors to guide the optimization of 3D Gaussian Splatting (3DGS). This approach leverages pretrained geometric knowledge to effectively enhance both the reconstruction and rendering quality of 3DGS methods. Incorporating a flow prior into 3D reconstruction with Gaussian Splatting is interesting and intuitive. 
The primary concern in the original submission was the lack of experiments on popular datasets such as DTU and MipNeRF. The rebuttal addressed this by including additional experiments on the DTU dataset, strengthening the evaluation of the proposed method.\", \"additional_comments_on_reviewer_discussion\": \"Several issues were raised in the initial reviews, and the rebuttal effectively addressed most of them.\\n\\nA shared concern is about the limitations of the datasets. In particular, there is a lack of evaluation of some widely used geometry reconstruction datasets, such as DTU. The rebuttal addressed this by including additional experiments on the DTU dataset, where the proposed method demonstrated notable improvements. Additionally, at the reviewer's suggestion, the rebuttal provided comparisons with neural field-based methods, further strengthening the evaluation.\\n\\nAnother concern was the method's dependency on the performance of the pretrained optical flow model. The rebuttal argued that advancements in optical flow models would continuously benefit the proposed method. This was demonstrated by replacing RAFT with SEA-RAFT, which improved performance for the proposed method, illustrating its adaptability to better flow models.\\n\\nThe rebuttal effectively addressed most of the concerns raised during the review process. Most reviewers were optimistic about the paper by the end of the discussion stage.\"}" ] }
Bzro1bgkTQ
Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs
[ "Hrishikesh Viswanath", "Yue Chang", "Julius Berner", "Peter Yichen Chen", "Aniket Bera" ]
We propose accelerating the simulation of Lagrangian dynamics, such as fluid flows, granular flows, and elastoplasticity, with neural-operator-based reduced-order modeling. While full-order approaches simulate the physics of every particle within the system, incurring high computation time for dense inputs, we propose to simulate the physics on sparse graphs constructed by sampling from the spatially discretized system. Our discretization-invariant reduced-order framework trains on any spatial discretizations and computes temporal dynamics on any sparse sampling of these discretizations through neural operators. Our proposed approach is termed Graph Informed Optimized Reduced-Order Modeling or \textit{GIOROM}. Through reduced order modeling, we ensure lower computation time by sparsifying the system by 6.6-32.0$\times$, while ensuring high-fidelity full-order inference via neural fields. We show that our model generalizes to a range of initial conditions, resolutions, and materials.
[ "Reduced order modeling", "Neural Operator", "lagrangian dynamics", "neural field", "discretization invariance" ]
Reject
https://openreview.net/pdf?id=Bzro1bgkTQ
https://openreview.net/forum?id=Bzro1bgkTQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xvTJhYiOeg", "vxzBbtrp16", "vpDZ7ZD6Vc", "ujUJRROutq", "uiCu1OhRZr", "qna40ImqaQ", "lmqGPKFkcL", "hGpOlDpSO7", "fcrdzQwUX1", "bRUSRry7yi", "Zav8XguRbP", "Z8aBxNK0hQ", "WvpndbqvXk", "RvPOpGIU9M", "NFrmCKJkGd", "Kxt4SaHpmJ", "CGRUUNxtqR", "6SqVCa3Ipw", "1qBp75N1cF" ], "note_type": [ "official_review", "official_comment", "decision", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730704229387, 1732251831982, 1737523880344, 1732647539491, 1734705070688, 1730497262994, 1732251933188, 1732646253424, 1732250682089, 1730212745907, 1732249572326, 1733170396179, 1729929674759, 1732532574129, 1732251636309, 1732252600417, 1732251674274, 1730678829923, 1732572372576 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7991/Reviewer_iRQR" ], [ "ICLR.cc/2025/Conference/Submission7991/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7991/Authors" ], [ "ICLR.cc/2025/Conference/Submission7991/Area_Chair_jYoa" ], [ "ICLR.cc/2025/Conference/Submission7991/Reviewer_LDdW" ], [ "ICLR.cc/2025/Conference/Submission7991/Authors" ], [ "ICLR.cc/2025/Conference/Submission7991/Reviewer_WgNe" ], [ "ICLR.cc/2025/Conference/Submission7991/Authors" ], [ "ICLR.cc/2025/Conference/Submission7991/Reviewer_oDsv" ], [ "ICLR.cc/2025/Conference/Submission7991/Authors" ], [ "ICLR.cc/2025/Conference/Submission7991/Authors" ], [ "ICLR.cc/2025/Conference/Submission7991/Reviewer_eAsE" ], [ "ICLR.cc/2025/Conference/Submission7991/Reviewer_eAsE" ], [ "ICLR.cc/2025/Conference/Submission7991/Authors" ], [ "ICLR.cc/2025/Conference/Submission7991/Authors" ], [ "ICLR.cc/2025/Conference/Submission7991/Authors" ], [ "ICLR.cc/2025/Conference/Submission7991/Reviewer_WgNe" 
], [ "ICLR.cc/2025/Conference/Submission7991/Reviewer_LDdW" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a learning-based reduced-order modeling framework for Lagrangian simulation. The proposed model comprises a message passing layer and an operator transformer. To reduce the computational complexity, the original discretized field is first down-sampled and then reconstructed via a linear combination of neural field bases. Extensive numerical experiments on different Lagrangian simulations, including fluids and sand, are conducted to demonstrate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Combining a continuous decoder with a neural operator in the latent space is technically sound. The numerical experiments showcase that the proposed framework significantly reduces the computational cost while maintaining good accuracy. The continuous decoding strategy also makes it more flexible with different discretizations.\", \"weaknesses\": \"1. Many details about the experiments are vaguely described, which makes it difficult to interpret some of the results presented.\\n For example, in equation 4, how is the coefficient q derived in practice? Is it predicted by another neural network, or is it optimized directly via least-squares? If it is optimized online, then during the inference stage, it will require extra optimization for every reconstruction from reduced mesh to full mesh. In table 4, several baselines are listed but there is no formal definition or a brief introduction of them; for example, I could not find the definition of IPOT. In table 3, what is the autoencoder? Is it a GNN or just an MLP? If it's a GNN then shouldn't it be permutation-equivariant?\\n\\n2. As shown in Figure 1, the reduction part does not contain any learnable parts but rather just relies on a non-learnable sampling method, which can potentially result in information loss.\\n\\n3. 
(Relatively minor) The systems considered in the numerical experiments are rather small-scale, all less than 100k nodes/particles.\", \"questions\": \"1. When applying FNO as part of the latent dynamics model, how do you handle the empty voxels/regions?\\n\\n2. The paper compares with full-order models like GNS and other reduced-order models; how does the model compare to multi-grid models like Cao et al. [1]?\\n\\n3. Is it necessary to do the sampling of full-order mesh at every timestep, and why not stay in the latent space with a fixed set of particles sampled in the beginning?\\n\\n\\n[1] Cao Y, Chai M, Li M, et al. Efficient Learning of Mesh-Based Physical Simulation with BSMS-GNN[J]. arXiv preprint arXiv:2210.02573, 2022.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Major Revisions**\\n\\nWe have revised the paper as suggested. However, we welcome any further feedback on any parts of the paper that require improvements in presentation. \\n\\n1. The paper's novelty is either limited or not effectively communicated\\n A. We have re-written the introduction section to better clarify our novelty and distinguish our method from prior approaches. \\n2. Sections 3, 4 - explanations of how they fit together, Section 5 - Additional explanation for experiments. \\n A. We provided explanations at the start of sections 3 and 4, explaining how they relate to Figure 1. The introductory section further distinguishes how the parts explained in sections 3 and 4 differ from prior literature. We furthermore added additional details about the experiments in section 5. \\n\\n3. To improve clarity, the authors should provide a more detailed rationale for each experiment, explaining the intentions behind the choices and the methodologies\\n A. 
Each experiment has been updated with a paragraph explaining the experimental setup, evaluation metric, and the reasoning for the experiment. \\n \\nQ1. Although the paper claims high fidelity across various initial conditions and resolutions, it does not address complex cases with extreme variability, such as materials undergoing phase transitions or highly turbulent flows. These more complex scenarios, which are generally more challenging to model with reduced-order methods, are not covered in the results presented.\\n\\nA1. Indeed, our current approach can have issues with discontinuities and stress concentration (e.g., shear localization). We will add this to the discussion/limitation section of our paper. Increasing the sampling resolution can partially alleviate the issue, but tradeoffs between accuracy and the number of samples have to be made (See Figure 6). Importance sampling methods that increase sampling density in the singular geometry region can lead to more efficient results (See Contact-centric deformation learning, ACM SIGGRAPH 2022; Optimizing Cubature for Efficient Integration of Subspace Deformations, ACM SIGGRAPH 2008). The SPH-based Newtonian fluid datasets used in the paper exhibit chaotic behavior. We will release the dataset containing these trajectories, similar to what is presented in the GNS paper [Sanchez-Gonzalez, 2020]. Modeling phase transitions is a consideration for future work, which would further solidify reduced-order modeling. \\n\\nQ2. What are the limitations of sparse sampling in terms of capturing fine-grained details? Can you provide information on the trade-offs between computational efficiency and the accuracy of capturing system details?\\n\\nA2. We have updated Figure 6 in the new PDF to contrast accuracy, GPU usage, and computation time against sparsity and graph radius. We observe a decrease from ~7GB to <1GB on 0.031x sampled graphs, while achieving a rollout MSE of the order of 1e-4. \\n\\nQ3. 
How does the model perform when interpolating or extrapolating to new materials or conditions not represented in the training data? It is mentioned that these cases are challenging but no empirical evidence is provided.\\n\\nA3. We used a large number of trajectories and showed that the model can handle unseen trajectories with unseen conditions, generated from random initial velocity vectors with random seeds. We also showed that the model can handle multi-material scenarios, as shown in Table 1. However, the model is unable to generalize to unseen materials because each material is encoded within the encoder, and for an unseen material, the encoding would be undefined or would be approximated to the nearest known material. \\n\\nQ4. Can you provide a systematic study on the scaling performance with varying graph size for the same dynamical system?\\n \\nA4. Yes, this has been provided in the updated Figure 6. We observed lower GPU usage, faster computation time, and comparable performance with higher levels of sparsity, up to 0.031x the full-order point cloud (78k particles). Increasing sparsity beyond 0.031x leads to degradation of performance. \\n\\nQ5. Are there specific physics-based scenarios where this reduced-order approach might be less effective?\\n\\nA5. When the system is not bounded and the number of particles in the system is not fixed, the reduced-order representations may fail to capture newer information that was previously not in the system. An example could be a fluid flow system with particles constantly entering and leaving the system of vortices.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We appreciate your feedback and thank you for providing constructive criticism that helped us improve the quality of the paper.\\n\\nQ1. We agree with your assessment that our datasets are not harder than the GNS dataset. We apologize for the prior miscommunication that our dataset is more difficult to learn. 
In fact, \\\"This is just the initial state of the point cloud; from there on, it is just a material falling on the floor, very much the same as in GNS\\\", this is precisely what we wanted to verify when we chose different geometries. We used a similar setup as the GNS dataset for water and sand, choosing the same style of trajectories and duration, with similar boundary conditions, with the only difference being the shape of the object, because we wanted to confirm the geometry invariance of our model.\"}", "{\"metareview\": [\"The submission deals with Lagrangian simulation and proposes a method based on discretization, downsampling and reconstruction. Five reviewers generally appreciated the paper but raised several weaknesses:\", \"Novelty,\", \"Lack of details, clarity,\", \"the limiting role of farthest point sampling,\", \"complexity of the method and justification of the contributions,\", \"Small scale experiments, simplicity of the problems,\", \"No standard datasets used,\", \"Positioning,\", \"The authors could provide answers for some of these issues, but some important problems remained. The AC sides with the critical reviews and judges that the paper is promising but not yet ready for publication. The decision is mainly based on several critical issues: the refusal to use standard datasets and the lack of justification for the necessity of sparsification.\"], \"additional_comments_on_reviewer_discussion\": \"The reviewers engaged with the authors, and discussed the paper with the AC.\"}", "{\"summary\": \"This paper develops a technique for simulating physical systems by combining graph-informed ROM and neural operators. First, the initial full-order mesh is coarse-grained to a sparser mesh using the farthest point sampling method. That smaller graph is encoded via an interaction network by performing several message passing steps, capturing local spatial interactions. 
Then, the encoded features are embedded into a regular grid, processed with a neural operator transformer and decoded with a second interaction network in order to perform the integration step. Each snapshot computed on the reduced space can be projected back to a full-order system by using a learnt linear basis transformation, which can be efficiently solved with least squares and evaluated at any arbitrary point. The method is applied to several examples in both solid and fluid mechanics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The method is very versatile, as it is able to handle different types of systems and material models in continuum mechanics problems. It also has very good results in generalization.\", \"The processing pipeline is discretization-agnostic, and the solution can be sampled at an arbitrary level of discretization.\", \"Solving the dynamics in the reduced space makes the algorithm very fast in inference time.\"], \"weaknesses\": [\"The graph construction is computationally demanding.\", \"The method could be problematic with challenging geometries or singular phenomena, in which the farthest point sampling might omit relevant details of the domain.\", \"The pipeline is too complicated and has many hyperparameters, which might be a problem for learning larger-scale systems and more complicated physical behaviours.\"], \"questions\": [\"It is very surprising that the method is very complicated in the encoding stage (going from full order to reduced order) but fairly simple in the decoding stage. In my opinion, the paper has not completely justified why the use of a graph encoder + interaction network is better than other existing reduction methods. 
I would suggest the authors perform an ablation study comparing the current graph encoder approach with a neural field encoding, so there is clear evidence that the use of a sparsified graph and an interaction network is better than a simpler approach.\", \"Could the authors provide a sensitivity analysis showing how performance and computational cost vary with different values of Q (the size of the reduced point discretization)? What are the heuristics or guidelines used for selecting Q in practice? This would help readers understand the tradeoffs involved in selecting this parameter.\", \"Line 164: \\\"This ensures an even distribution of points, [...]\\\". This is an advantage from the computational perspective, but it might be problematic under certain conditions. For example, pressure discontinuities in fluid dynamics or stress concentration in solid mechanics. Those singular regions might not be correctly captured by just using a distance criterion in the farthest point algorithm, and a finer discretization might be advantageous in those areas. Can the authors provide any discussion of limitations in this regard?\", \"Figure 2: The bunny and cow examples show self-contact between surfaces. Is there any specific condition to handle these situations from the graph generation perspective, and is it handled differently in solids and fluids? Could the authors discuss any limitations in this area?\", \"Figure 3: The figure is misleading. Why is the farthest point graph denser than the full-order graph? I would expect something like the Delaunay Graph example.\", \"Table 3: Can the authors clarify what \\\"randomizing the indices\\\" means and its significance in the reduction method?\", \"My guess is that the authors refer to different ordering in snapshots and nodal indices. If that is the case, it is not obvious why that randomization should affect the final result in a relevant way. 
In fact, POD/PCA decomposition is constructed over the snapshot covariance matrix and it is invariant under any permutation transformation.\", \"Table 3: Can the authors provide more details about the autoencoder baseline and the LiCROM projection, including parameter counts, training times, and network topology? Could the authors include a fairer comparison with a closer method, such as its non-linear version (CROM)?\"], \"final_comment\": \"This paper has very promising results and generalization capabilities, but the contributions are not well justified. The real contribution of the paper is the graph sparse reduction together with an interaction operator, as the use of neural operator transformers and LiCROM neural fields is not novel. In the current state of the paper, there is a lack of comparison with other state-of-the-art end-to-end reduction techniques. For this reason, my final rating is marginally below the acceptance threshold, but I would be open to raising it with enough justification of my mentioned points.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"I have no ethics concerns.\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Q1. Only an explicit time integrator is considered. Please add motivation for this choice to the paper.\\n\\nA1. Indeed, the present work only considers explicit time integration. This is consistent with prior machine-learning-based time stepping methods (See Learning to Simulate Complex Physics with Graph Networks). We agree that it is exciting to work on alternative time integration schemes, such as Runge-Kutta and implicit Euler, in the future.\\n\\nQ2. Please clarify the units of duration for Table 1.\\n\\nA2. We have clarified these in the updated PDF. The time interval is 5e-3s per time-step. \\n\\nQ3. Please add information about the numerical scheme used in nclaw. Is it the same scheme used for training the model?\\n\\nA3. 
Yes, nclaw also uses an explicit time integration scheme. That said, the training data for the elastic examples (owl-shaped mesh from the LiCROM paper) uses an implicit integration scheme. Since our model works in both of these settings, we show that our work is compatible with training data of any time integration scheme. \\n\\nQ4. What about stability issues of the numerical scheme used? How is the time step chosen?\\n\\nA4. Consistent with other machine-learning-based methods (e.g., GNS, CROM), our approach remains stable even if we take a larger time step than the training data generated using classical numerical methods. However, as the reviewer pointed out, our approach also becomes unstable if too big a time step is chosen. All our examples are generated with the same time step as the training data, whether they are generated using an explicit (sand, nclaw dataset) or implicit scheme (elasticity, LiCROM dataset). We will further clarify this in the paper and add a stability analysis (w.r.t. the time step size).\\n\\nQ5. How does the developed approach compare with implicit numerical schemes in terms of stability and time-stepping requirements?\\n\\nA5. As discussed in the previous answer, our approach can take the same time step size as the training data generated using implicit numerical schemes (elasticity, LiCROM dataset). It will be interesting for future work to explore the possibility of integrating implicit time integration with machine-learning-based force evaluation, which, to our knowledge, has not been done before. This will require differentiating the neural operator and graph neural network during runtime.\"}", "{\"comment\": \"Q1: I disagree that your datasets are harder to learn because your initial shapes are actually not so relevant. 
This is just the initial state of the point cloud; from there on, it is just a material falling on the floor, very much the same as in GNS.\\n\\nQ2/3/4/5/6/7: Thanks.\\n\\nOverall, I agree with reviewer oDsv that your contribution is rather limited - solving the GNS task with a combination of GNO (encoder/decoder) + Transformer (processor) is nothing groundbreaking. However, the sheer amount of ablations (e.g., graph subsampling and construction) is a reason to keep my score at 6 \\\"above the acceptance threshold.\\\"\"}", "{\"comment\": \"Thank you for your feedback.\\n\\nQ1. Is there any reason why you didn't use existing 3D datasets, e.g., from Sanchez-Gonzalez et al. (2020)? The only subset that is not there is a 3D multi-material one. Looking at the baseline results in your Table 4, GNS and the proposed model perform similarly. Comparability with the baseline numbers from the GNS paper would have been very useful.\\n\\nA1. The NCLAW datasets were generated with complex geometries, allowing us to test discretization invariance and geometry invariance. The 3D GNS datasets offer little flexibility for testing these invariance features since they do not have complex geometries. \\n\\nQ2. You say on lines 351-352 that random sampling has a similar performance to FPS, but on L. 329, you say that you do use FPS after all. Having some experience with FPS, I know that it is a sequential process over the point cloud, with the number of iterations being the number of subsampled points. And this sequential nature becomes a bottleneck when working with large point clouds like the presented 3D ones. Why didn't you just use random sampling?\\n\\nA2. We show in Table 13 that the two sampling strategies have comparable performance. While our point clouds were small enough (<100k points), we used FPS for an even distribution of points; on larger point clouds, random sampling may be used without loss of accuracy. \\n\\nQ3. What do you mean by POD and an autoencoder in Table 3? 
Do you explain this somewhere? I don't think either of these two is a representative baseline. In addition, Table 3 has multiple identical lines, all of which could be summarized in one sentence. Please consider removing this table altogether.\\n\\nA3. For reduced-order methods (ROM), POD and autoencoders are standard baselines. See [Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders] by Lee et al. We have added more explanations and background citations for these two methods. We apologize for the identical lines. We have removed the table from the paper.\\n\\nQ4. Which dataset do you use for Table 3?\\n\\nA4. We have updated the paragraph to include the dataset used for computing the metrics. We used the Owl Dataset (Elasticity) for all the experiments shown in the (now removed) Table 3. \\n\\nQ5. Table 3: Did you consider discussing \\\"ROM baselines\\\" (L.431) and \\\"Discretization-agnostic ROM\\\" (L.514) next to each other, so that Table 3 is close to both?\\n\\nA5. Thank you for the recommendation. We have updated it in the revised PDF.\\n\\nQ6. You cite the DINo paper (Yin et al., 2023) and a few similar ones, but I didn't see a discussion of how your approach improves on them. To my knowledge, you could have used the DINo encoder and decoder on the velocity field (and Euler-integrate once), and you would only have had to add an operator transformer in between. Am I missing something?\\n\\nA6. DINo works on grid-based inputs. Therefore, it is necessary to include an integral transform operation to convert from an unstructured point cloud to a structured grid before passing it to DINo. We performed experiments with DINo as the processor on the elasticity dataset, in place of NOT, and observed similar performance to NOT, showing that it is a valid time-stepper.\", \"io_dino_io\": \"1-step MSE: 6.25e-10, Rollout MSE: 7e-4\\n\\n\\nQ7. \\n\\nThe following changes have been made in the updated PDF. \\n\\n\\nL. 310, \\\"Ma et al. 
(2023)\\\" -> \\\"(Ma et al., 2023)\\\" \\nL. 412: please add some space between the captions of subplots (a) and (b). Currently, it is unclear that there are two subplots.\", \"table_13\": \"please reduce the font size of this table\\nL. 1048: \\\"our model is at least 5x faster than [GNNs]\\\" doesn't match with the numbers in Tables 9 and 10. It is rather a 2-4x speedup. Please reformulate.\"}", "{\"summary\": \"The paper introduces Graph Informed Optimized Reduced-Order Modeling, a neural-operator-based framework designed to accelerate simulations of complex Lagrangian dynamics, such as fluid flows. GIOROM intends to reduce computational costs by training on sparse graphs sampled from spatially discretized systems, allowing it to simulate temporal dynamics efficiently without depending on full spatial resolution.\\nThe authors emphasize that the proposed framework is discretization-invariant, meaning it can generalize across various spatial discretizations and resolutions. To achieve this, the framework uses a graph-based neural operator transformer to model temporal dynamics on sparse representations and leverages continuous reduced-order modeling with neural fields to reconstruct the full-order solution, enabling evaluation at any spatial point within the system.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This approach shows some innovation by integrating graph-based neural networks and continuous reduced-order techniques, distinguishing it from more conventional dense simulations, and the results point to computational gains. This acceleration, while retaining high accuracy, can benefit industries where large-scale physics simulations are essential, e.g. engineering.\", \"weaknesses\": \"The paper's novelty is either limited or not effectively communicated, as the work primarily appears to combine approaches from the cited prior studies [Li et al., 2024; Chen, 2024]. 
The rationale behind the specific design choices and the reasons for the approach's effectiveness are not clearly explained.\\n\\nSignificant revision is needed, particularly in Sections 3, 4 and 5. Sections 3 and 4 introduce the methods used but fail to clarify how the proposed work differs from existing research or to explain how the components fit together cohesively. The introduction is challenging to follow, especially since it doesn\\u2019t clarify how this work advances beyond prior literature. A more logical approach could be to introduce the complete framework as depicted in Figure 1 and follow the flow of this diagram, highlighting any novel contributions.\\n\\nWhile the experimental nature of the work in Section 5 and in the Appendix is valuable, the structure of the experimental section lacks a clear line. It consists of nine figures and tables with minimal explanation. To improve clarity, the authors should provide a more detailed rationale for each experiment, explaining the intentions behind the choices and the methodologies used. Tables and figures might benefit from rearrangement or relocation to the appendix if necessary.\\n\\nAlthough the paper claims high fidelity across various initial conditions and resolutions, it does not address complex cases with extreme variability, such as materials undergoing phase transitions or highly turbulent flows. These more complex scenarios, which are generally more challenging to model with reduced-order methods, are not covered in the results presented.\\n\\nThe research in this work is quite interesting; however, I cannot recommend the paper for acceptance in its current form as major revision is needed.\", \"questions\": \"How does the model handle edge cases with high variability or extreme dynamics? 
It would strengthen the work to clarify how robust the model is for complex, highly dynamic systems, such as those with rapid phase changes or intense turbulence.\\n\\nWhat are the limitations of sparse sampling in terms of capturing fine-grained details? Can you provide information on the trade-offs between computational efficiency and the accuracy of capturing system details?\\n\\nHow does the model perform when interpolating or extrapolating to new materials or conditions not represented in the training data? It is mentioned that these cases are challenging but no empirical evidence is provided.\\n\\nCan you provide a systematic study on the scaling performance with varying graph size for the same dynamical system?\\n\\nAre there specific physics-based scenarios where this reduced-order approach might be less effective?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your feedback.\\n\\nQ1. In equation 4, how is the coefficient q derived in practice? Is it predicted by another neural network, or is it optimized directly via least-squares? If it is optimized online, then during the inference stage, it will require extra optimization for every reconstruction from reduced mesh to full mesh.\\n\\nA1. Yes, it is optimized directly via linear least squares. We follow equation 10 from https://arxiv.org/pdf/2310.15907. This boils down to solving a symmetric positive definite linear system using a single Cholesky factorization. Therefore, there is no computationally expensive optimization involved (faster than a neural network evaluation), thereby introducing minimal overhead (See Table 9, upscale time ~1e-4s). We have added this to the paper.\\n\\nQ2. In table 4, several baselines are listed but there is no formal definition or brief introduction of them; for example, I could not find the definition of IPOT. \\n\\nA2. 
We apologize for this oversight. These models have been defined in the updated Pdf, along with the relevant citations. \\nGNN - Graph Neural Network, GAT - Graph Attention Network, GINO - Geometry Informed Neural Operator, GNOT - General Neural Operator Transformer, IPOT - Inducing Point Operator Transformer. \\n\\nQ3. In table 3, what is the autoencoder? Is it a GNN or it's just a MLP? If it's a GNN then shouldn't it be permutation-equivariant?\\n\\nA3. It is an MLP autoencoder, similar to the ones commonly used in standard reduced-order models. See [Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders] by Lee et al. Our approach, on the other hand, guarantees permutation-equivariance and allows for evaluation at arbitrary locations in space. This has been clarified in the updated Pdf. \\n\\nQ4. As shown in Figure 1, the reduction part does not contain any learnable parts but rather just rely on non-learnable sampling method, which can potentially result in information loss.\\n\\nA4. Yes, information loss is possible. As we discussed in Figure 6 (left), there is a trade-off between accuracy and the number of points sampled. Indeed, it is an exciting future work to explore learnable methods to potentially reduce information loss. Furthermore, the \\u201cpotential information loss\\u201d during training enhances the time-stepper model robustness and accelerates training. For inference, we can take any resolution\\u2014in particular, also omit subsampling.\\n\\nQ5. When applying FNO as part of the latent dynamics model, how do you handle the empty voxels/regions?\\n\\nA5. The FNO is applied to functions on an equidistant grid. The integral transform before and after the FNO allows us to change the discretization from a general point cloud, as done in GINO [Li et al., 2024]. 
In particular, function values on the latent grid originate from learned aggregations over neighborhoods with a fixed radius, i.e., having more neighbors if the particles are denser.\\n\\nQ6. The paper compares with full-order models like GNS or other reduced-order models; how does the model compare to multi-grid models like Cao et al. [1]?\\n\\n\\nA6. The Interaction Operator presented in the paper can be converted to a multi-level graph model, which, as shown in \\\"Multipole Graph Neural Operator for Parametric Partial Differential Equations, Li et al.\\\", indeed behaves as a neural operator. It is an exciting future research direction to see how the improved time complexity of multi-grid operators can make learning Lagrangian dynamics faster and more efficient. \\n\\nQ7. Is it necessary to do the sampling of the full-order mesh at every timestep, and why not stay in the latent space with a fixed set of particles sampled in the beginning?\\n\\nA7. During training, to increase robustness, the inputs are partitioned into slices of 6 time-steps, 5 for input and 1 for output. All 6 frames have the same set of particles. However, other slices may have a different set of particles. During inference, the model predicts a fixed set of particles sampled at the initial time-step over the entire temporal sequence. An additional feature supported by the model is that the second (decoder) integral transform layer can infer particles that were not in the input graph. This theoretically allows us to have a varying set of particles (between input and output) at each time-step. This feature was, however, not tested because we saw no application of it in the scenarios shown in the paper.\"}", "{\"comment\": \"Q4. We agree with you that the paper needs a table comparing against baseline MOR architectures. We are currently running experiments to provide the numbers. 
We apologize that, due to time constraints, we are unable to provide an entire table as of now, but we do have some results that we would like to share with you.\\n\\nOur work fundamentally differs from recent discretization-invariant MOR (Model-Order Reduction) techniques (e.g., LiCROM, CROM) in the sense that our work is a \\u201cnon-intrusive\\u201d MOR method while those methods are \\u201cintrusive\\u201d methods. This means that even after they are trained, LiCROM and CROM require the PDE solver to time-step during inference. \\n\\nIntrusive methods like LiCROM and CROM require that the underlying equations are known, including the detailed material parameters. Without explicit knowledge about the equation, these approaches cannot do time integration. Moreover, they also require that the corresponding numerical method code (e.g., FEM, MPM, SPH) used to generate the data is available. Such a requirement prevents them from applying to real-world engineering problems where the underlying equation or the traditional numerical simulation code is unavailable.\\n\\nBy contrast, GIOROM is a \\u201cnon-intrusive\\u201d MOR technique thanks to our neural-operator-based time integration module (i.e., the interaction operator IO + Neural Operator Transformer). With that, we do not require any explicit knowledge of the underlying equation or any material parameters, across the entire pipeline. \\n\\nTo validate this, we performed the experiment where we used the pre-trained LiCROM setup to infer the plasticine system using just the data and no explicit information about the PDE, and we show that LiCROM fails to capture the system as effectively, in the absence of the Interaction Operator, which can generalize to data and does not require any explicit PDE knowledge. 
We present the result on the nclaw plasticine dataset: \\n\\nGIOROM \\nroll out 0.00014372089775667015\\n\\nLiCROM without knowledge of physics\\nroll out 0.01592236121586893\\n\\nWe will provide the results on the other systems in the final version of the pdf and apologize for not providing them here due to time constraints.\\n\\nLiCROM (During Inference)\\nKnown PDE -> [Order Reduction] -> FEM/MPM time-stepping with PDE knowledge -> [Increase to FOM with neural field]\\n\\nGIOROM (During inference)\\nUnknown PDE -> [Order Reduction] -> Neural Operator + data input -> [Increase to FOM with neural field]\\n\\nWe provide an anonymized link to some visualizations we have generated to showcase that in the absence of PDE information, LiCROM time-stepping fails\", \"https\": \"//www.dropbox.com/scl/fo/l1kf7le8px45bqbnuyb77/AIa61zqE5A-1iaXh1LUrS6M?rlkey=d5qaf0alar86752n4o67i5gfb&st=o8xk7v2y&dl=0\"}", "{\"summary\": \"The article introduces Graph Informed Optimized Reduced-Order Modeling (GIOROM), a framework designed to accelerate simulations of Lagrangian dynamics\\u2014such as fluid flows, granular flows, and elastoplasticity\\u2014using neural-operator-based reduced-order modeling. GIOROM addresses this by simulating physics on highly sparse graphs sampled from the spatially discretized system, achieving a reduction in input size by 6.6 to 32 times compared to full-order models.\\n\\nTo capture local spatial features of the discretized input, the authors define a graph-based neural operator called the Interaction Operator, which performs two key tasks:\\n\\n1. It uses a discretization-agnostic adaptation of message passing to model interactions between points, regardless of the discretization.\\n\\n2. 
It leverages a Graph Neural Operator (GNO) layer to project features onto a regular grid, facilitating the construction of a latent space upon which the time-stepper operates.\", \"authors_use_following_scheme\": \"Discretization-agnostic MP -> GNO -> point-wise MLP -> Neural operator transformer -> GNO -> discretization-agnostic MP. This scheme predicts the acceleration field in the Q-point discretization, which is used for computing the velocity at t_(n+1) and the deformation at t_(n+1) (using an explicit Euler scheme); after that, a combined loss is calculated between the ground truth deformations and velocities (in this discretization). The neural field is trained using a reconstruction loss and used for full-order inference (to the P-point discretization). The developed approach generalizes well across different initial conditions, velocities, discretizations and geometries.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The developed approach generalizes well across different initial conditions, velocities, discretizations and geometries.\\n\\n2. The paper includes a thorough ablation study, which evaluates the impact of different components of the model. This study helps in understanding how each part contributes to the overall performance, thereby validating the effectiveness of the proposed methods.\\n\\n3. The authors provide strong scientific justifications for their approach, supported by rigorous experiments.\", \"weaknesses\": \"1. Only an explicit time integrator is considered. Please add motivation for this choice to the paper.\\n2. Sometimes units in tables are not presented.\", \"questions\": \"1. Please clarify the units of duration for Table 1.\\n2. Please add information about the numerical scheme used in nclaw. Is it the same scheme used for training the model?\\n3. What about stability issues of the numerical scheme used? How is the time step chosen?\\n4. 
How does the developed approach compare with implicit numerical schemes in terms of stability and time-stepping requirements?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your answers. I raise my score to 6.\"}", "{\"comment\": \"Q1. The graph construction is computationally demanding.\\n\\nA1. We show in Table 4 that using Graph layers allows us to achieve the best performance since our inputs are irregular and neural operators such as FNO, GNOT, DINO etc. are designed to work on regular grids. GINO does not account for inter-particle interactions, which is crucial for Lagrangian Dynamics. In their absence, the models fail to capture spatial information, leading to suboptimal performance. We created computation benchmarks in Figure 6 of the updated PDF which shows how graph size affects the GPU usage and computation time as a function of graph radius and sampling percentage.\\n\\nQ2. Handling discontinuities/limitations with farthest point sampling\\n\\nA2. Indeed, our current approach can have issues with discontinuities and stress concentration (e.g., shear localization). Increasing the sampling resolution can partially alleviate the issue, but tradeoffs between accuracy and the number of samples have to be made (See Figure 6). Importance sampling methods that increase sampling density in the singular geometry region can lead to more efficient results (See Contact-centric deformation learning, ACM SIGGRAPH 2022; Optimizing Cubature for Efficient Integration of Subspace Deformations, ACM SIGGRAPH 2008).\\n\\nQ3. Complicated Pipeline, too many hyper-parameters. \\n\\nA3. We agree with the reviewers that this model has several hyperparameters. 
However, we have shown in the config files that, once tuned, many of these hyperparameters can be considered universal hyperparameters that work across physical systems, with the standard deviation of the noise being one of the few features that changes across them. We require only a little system-specific tuning. Its flexibility and modularity can also be seen as an advantage of being able to tailor it to more complicated systems (informed choices of subsampling, neighborhood selection, edge information, smoothness prior in the decoder).\\n\\nQ4. Complexity of Encoder (with Interaction Operator) and Decoder\\n\\nA4. We would like to clarify that the order reduction is done through farthest point sampling and not using learnable techniques. The encoder is used to capture the local spatial interactions. This has been clarified in the updated Figure 1. However, the use of a Neural Field as the encoder would speed up the process by removing the message passing. As such, we performed an ablation study with an NF-NOT-NF setup on the elasticity dataset, where the order-reduction is learnable through the NF encoder. However, we observed that NF was not as efficient as the Interaction Operator in capturing spatial dependencies and interactions. We present the results below:\\n\\nNF-NOT-NF: 1-step MSE: 1.26e-6, Rollout MSE: 1.35\\nIO-NOT-IO: 1-step MSE: 5.07e-10, Rollout MSE: 4e-4\\n\\nQ5. Handling self-contact\\n\\nA5. The training data is generated by the material point method (MPM), which does not explicitly check collision and implicitly handles self-collision through a background grid, for both solids and fluids. We follow a similar approach as MPM as well as the GNS baseline by letting the neural network implicitly handle these self-contacts. This has been added to the discussion section. Better fine-grained self-contact sampling is an exciting future work direction (See Contact-centric deformation learning, ACM SIGGRAPH 2022). 
We will add it to the future work.\\n\\nQ6. Figure 3: The figure is misleading. Why is the farthest point graph more dense than the full-order graph? \\n\\nA6. We have updated Figure 3 to include N(odes), E(dges), R(adius) for each of the graphs. The FPS-based graph has more edges because it uses a larger radius than the full-order system. The edges connect points that are farther away from each other. The Delaunay graph, on the other hand, has a fixed number of edges, thus making it discretization-dependent. Having edges connecting points further away allows us to efficiently model long-range relationships, which, due to the properties of graph layers, is retained if the density increases. However, importantly, if the density decreases, adding new edges by increasing the radius of the input during inference allows the model to still infer the correct dynamics. This has been shown in Figure 6 (sparsity vs. accuracy). Furthermore, to justify that the speedup is not affected due to the addition of new edges, we show in Tables 9 and 10 that our setup outperforms other time-steppers in terms of speed even with more edges.\"}", "{\"comment\": \"We sincerely thank the reviewers for the valuable feedback they provided, which has helped us improve the clarity of presentation.\\n\\nWe have made major revisions to the paper as suggested by Reviewer oDsv, with the revised sections highlighted in red in the updated PDF. \\nThese include but are not limited to:\\n1. Clearly stating the contribution and how it differs from prior work in the introduction\\n2. Providing better explanations at the start of sections 3 and 4, regarding how they relate to Figure 1 and the overall ROM setup. \\n3. Explanation of the motivation behind each experiment conducted in Section 5\\n4. Additional discussions that were suggested by the reviewers.\\n\\nCommon responses:\\n\\nQ1. Justification of the architecture and how it differs from prior work. \\nA1. 
We present a new mathematical formulation, the Interaction Operator, which takes into account the inter-particle interactions used in Lagrangian dynamics. The prior implementation of the Integral Transform [GINO, Li et al., 2024] was designed to work on Eulerian formulations with no temporal dynamics and thus did not take into account local particle interactions. \\n\\nQ2. Vagueness of Experimental Setup\\nA2. This has been addressed in the revised pdf, with explanations provided for all the experiments in section 5 and their corresponding tables.\"}", "{\"comment\": \"Q7. Effect of randomization\\n\\nA7. Thank you for the clarifying question. We have clarified this further in the updated paper. In a nutshell, we do not mean changing the order in snapshots or nodal indices. Rather, we meant that after training, the POD/PCA/autoencoder methods could not evaluate information other than the original mesh on which they were trained. To verify this, we pass in different meshes of the domain (created by randomizing the indices) and confirm that they indeed do not work. In fact, any mesh that is not the original trained mesh would not work with POD. Unlike POD, the neural field-based approach is discretization invariant and can evaluate arbitrary meshes, whether they are created by indices randomization or not. We will clarify this in the paper and add additional experiments where the alternative mesh is created through other techniques, for example, different triangulations of the domain.\\n\\nQ8. Details about the autoencoder baseline and the LiCROM projection?\\n\\nA8. The details regarding the autoencoder baseline have been provided in the updated PDF. As per the suggestion of Reviewer WgNe, we have removed Table 3 and replaced it with numbers within the discussion. Our method is compatible with CROM as well, i.e., replacing the LiCROM components with CROM. 
Unfortunately, CROM entails higher computation costs (equation 4 becomes nonlinear least squares and thereby requires highly expensive solvers) and struggles with handling highly nonlinear deformations (see the comparison between CROM and LiCROM in figure 14 of https://arxiv.org/pdf/2310.15907). \\n\\nQ9. Sensitivity Analysis on Q\\n\\nA9. We have updated Figure 6 with different analyses on the effect of sampling.\"}", "{\"summary\": \"The paper proposes a neural operator approach to Lagrangian simulations. The core idea is to subsample the original point cloud, then transform it into/from a fixed-sized latent (grid) representation with Message Passing (MP) + Graph Neural Operator (GNO), and in the middle apply a transformer. This model is used to predict the acceleration at each node, which is then numerically integrated to evolve the system. While aligned with Graph Neural Networks (GNNs) for Lagrangian dynamics, the proposed method is discretization-independent and consistently speeds up simulations compared to GNNs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Extensive literature review.\", \"Detailed analysis of graph subsampling and construction implementations.\", \"Many details and ablations of the proposed approach, including an extensive appendix.\"], \"weaknesses\": [\"**W1: self-generated datasets**: Is there any reason why you didn't use existing 3D datasets, e.g., from Sanchez-Gonzalez et al. (2020)? The only subset that is not there is a 3D multi-material one. Looking at the baseline results in your Table 4, GNS and the proposed model perform similarly. Comparability with the baseline numbers from the GNS paper would have been very useful.\", \"**W2: Farthest Point Sampling (FPS)**: You say on lines 351-352 that random sampling has a similar performance to FPS, but on L. 329, you say that you do use FPS after all. 
Having some experience with FPS, I know that it is a sequential process over the point cloud, with the number of iterations being the number of subsampled points. And this sequential nature becomes a bottleneck when working with large point clouds such as the presented 3D ones. Why didn't you just use random sampling?\", \"**W3: POD and Autoencoders**: What do you mean by POD and an autoencoder in Table 3? Do you explain this somewhere? I don't think either of these two are representative baselines. In addition, Table 3 has multiple identical lines, all of which could be summarized in one sentence. Please consider removing this table altogether.\", \"**W4: DINo**: You cite the DINo paper (Yin et al., 2023) and a few similar ones, but I didn't see a discussion of how your approach improves on them. To my knowledge, you could have used the DINo encoder and decoder on the velocity field (and Euler-integrate once), and you would only have had to add an operator transformer in between. Am I missing something?\", \"**Minor:**\", \"L. 310, \\\"Ma et al. (2023)\\\" -> \\\"(Ma et al., 2023)\\\"\", \"L. 412: please add some space between the captions of subplots (a) and (b). Currently, it is unclear that there are two subplots.\", \"Table 13: please reduce the font size of this table\", \"L. 1048: \\\"our model is at least 5x faster than [GNNs]\\\" doesn't match with the numbers in Tables 9 and 10. It is rather a 2-4x speedup. Please reformulate.\"], \"questions\": [\"Which dataset do you use for Table 3?\", \"Table 3: Did you consider discussing \\\"ROM baselines\\\" (L.431) and \\\"Discretization-agnostic ROM\\\" (L.514) next to each other, so that Table 3 is close to both?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the assessment and clarification of my previous comments. 
I would like to make some comments about the rebuttal.\\n\\nQ6: I thank the authors for the updated Figure. Now it is clearer that the farthest point graph has fewer nodes than the full-order graph.\\n\\nQ7: Thank you for the clarification; now the explanation makes more sense.\\n\\nQ4: This was my major concern about the paper, and in my opinion it has not been addressed correctly. I understand the motivation of using the IO to account for local interactions, but the paper still lacks clear evidence that the \\\"reduction + encoding\\\" method is better than any other recent discretization-invariant MOR technique. I would have liked to see in all the examples of the paper, in a similar fashion to Table 5, the performance of GIOROM with respect to a LiCROM kind of reduction to clearly show that the sparse graph + IO step is totally necessary for the accuracy of the method.\\n\\nI think that the paper is in better shape after the rebuttal and thank the authors for the changes, but based on the comments above I would like to keep my initial score.\"}
BzljpHVfmX
An Asymptotic Theory of Random Search for Hyperparameters in Deep Learning
[ "Nicholas Lourie", "He He", "Kyunghyun Cho" ]
Scale is essential in modern deep learning; however, greater scale brings a greater need to make experiments efficient. Often, most of the effort is spent finding good hyperparameters, so we should consider exactly how much to spend searching for them&mdash;unfortunately this requires a better understanding of hyperparameter search, and how it converges, than we currently have. An emerging approach to such questions is *the tuning curve*, or the test score as a function of tuning effort. In theory, the tuning curve predicts how the score will increase as search continues; in practice, current estimators use nonparametric assumptions that, while robust, can not extrapolate beyond the current search step. Such extrapolation requires stronger assumptions&mdash;realistic assumptions designed for hyperparameter tuning. Thus, we derive an asymptotic theory of random search. Its central result is a new limit theorem that explains random search in terms of four interpretable quantities: the effective number of hyperparameters, the variance due to random seeds, the concentration of probability around the optimum, and the best hyperparameters' performance. These four quantities parametrize a new probability distribution, *the noisy quadratic*, which characterizes the behavior of random search. We test our theory against three practical deep learning scenarios, including pretraining in vision and fine-tuning in language. Based on 1,024 iterations of search in each, we confirm our theory achieves excellent fit. Using the theory, we construct the first confidence bands that extrapolate the tuning curve. Moreover, once fitted, each parameter of the noisy quadratic answers an important question&mdash;such as what is the best possible performance. So others may use these tools in their research, we make them available at (URL redacted).
[ "hyperparameters", "hyperparameter search", "hyperparameter tuning", "random search", "evaluation" ]
Reject
https://openreview.net/pdf?id=BzljpHVfmX
https://openreview.net/forum?id=BzljpHVfmX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zmQ2p5wp2M", "vRvMLnWIs5", "ra3UWGnIqL", "g1TJiWrMVI", "dka9L42hI1", "aaKET7PHpL", "Zt3yh0YgeV", "Z8zveZbrJo", "WXJ5vhq6u8", "TZVfqA7hZh", "Llt1FOtUKB", "L2hQra7fhp", "J1e0FoeakP", "IRK1yMlQHa", "G0e6Ef0HDt", "FVPzsvuiaJ", "F6v44ByM80", "E3tM8s9Vjd", "DhbUA8a1zy", "7a4kYzkhe5", "2LspMO0Daw" ], "note_type": [ "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732731715603, 1730493195368, 1732732740472, 1734902665990, 1733184786370, 1732733081390, 1732760183278, 1733296448120, 1732654698350, 1730455058648, 1732733950942, 1730681805643, 1732735196627, 1732735247049, 1732733241447, 1732730509550, 1730821661544, 1737524229711, 1732734546156, 1732734197782, 1732707563595 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Reviewer_FayV" ], [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Area_Chair_zKSn" ], [ "ICLR.cc/2025/Conference/Submission13008/Reviewer_FayV" ], [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Reviewer_FayV" ], [ "ICLR.cc/2025/Conference/Submission13008/Reviewer_NG9w" ], [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Reviewer_m19U" ], [ "ICLR.cc/2025/Conference/Submission13008/Reviewer_TdXJ" ], [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Reviewer_TdXJ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Authors" ], [ "ICLR.cc/2025/Conference/Submission13008/Reviewer_TdXJ" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the references on state-of-the-art hyperparameter tuning, we have incorporated them into our related work. As clarified in our general response: hyperparameter tuning algorithms are not the subject of our work; nonetheless, some readers might find those references interesting. We should also clarify: we never claimed HyperBand or ASHA to be state-of-the-art, rather we intended that they are strong algorithms which are still compared to it (for example, in your reference: [4]). To make this unambiguous, we have implemented your suggestion and revised the language from \\u201cnear state-of-the-art\\u201d to \\u201cobtaining high performance\\u201d.\\n\\nBefore addressing your other concerns, we emphasize again: our work does not propose random search as an alternative to state-of-the-art and it does not seek the best hyperparameter tuning algorithm; rather, our work develops better tools for the design and analysis of deep learning experiments, and offers a better understanding of random search itself\\u2014including its limitations. It would be interesting to develop similar tools for other hyperparameter tuning algorithms; however, that is beyond the scope of this work.\", \"to_address_your_individual_concerns\": \"*No baselines are considered*: The main task on which we evaluate is estimating and extrapolating confidence bands for tuning curves. We compare to Lourie et al. (2024) which, at present, offers the only confidence bands for tuning curves. 
We do not compare random search to other hyperparameter tuning algorithms as this is both out-of-scope and unnecessary&mdash;many such comparisons already exist in the literature.\\n\\n*Limited experiments*: As you point out, we evaluate on \\u201cdiverse deep learning models\\u201d spanning both vision and language. Nevertheless, empirical claims can always be bolstered by more experiments. Accordingly, we have added new experiments with AlexNet [1] and ConvNext [2] in Appendix D. With these additions, we cover over a decade of advancements in architecture, three models from vision, two models from language, and both pretraining and supervised fine-tuning. Combined with our theoretical proofs, this offers substantial evidence for our claims.\\n\\n*Related work is outdated*: As discussed above, we have addressed your concerns about the related work and updated it with the references you provided.\\n\\n[1]: A. Krizhevsky, I. Sutskever, G. E. Hinton. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25.\\n\\n[2]: Z. Liu, H. Mao, C. -Y. Wu, C. Feichtenhofer, T. Darrell and S. Xie. (2022). A ConvNet for the 2020s. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 
11966-11976.\"}", "{\"summary\": \"The paper describes a theory of tuning curves for random hyper-parameter tuning near the optimum under a smoothness assumption.\\nA parametric form of the tuning curve is described based on a novel description of the distribution of outcomes, and confirmed on three deep learning models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Hyper-parameter tuning is still a critical aspect of deep learning research and practice, and is understudied in the ML community.\\nThe paper proposes a concise methodology to understand the asymptotics of randomized search, with clear predictive capabilities.\\n\\nThe paper is well structured and clearly written.\", \"weaknesses\": \"I'm skeptical of a core assumption of the paper, which is whether the asymptotic regime is relevant in practice. If the function is smooth, and well-approximated by a quadratic locally, then clearly random search is not the right tool. Using a GP would provide immense benefits if the local smoothness assumption holds, and the search is \\\"near the optimum\\\".\\nThe main reason that random search is so successful is that in practice, many areas of the search space are not smooth, and jumps are common.\\nI'm quite surprised by how smooth the tuning curves in figure 1 and 3 are, and they are very unlike tuning curves I have seen with random search, which often stay constant for a long time, and don't progress for 10 or more iterations.\\n\\nThis might be due to the architectures used being extremely well understood, and, potentially, as the authors point out, easy to tune.\\nI would be quite curious to see if these results hold when tuning an MLP on, say the AutoML benchmark, or TabZilla.\\n\\nGiven the smoothness observed in the experiments given in this paper, I would be very interested to see how the tuning curve for TPE or a GP would look in these cases.\\n\\nI did not study all the mathematics in detail. 
Given the assumptions the formulation seems reasonable; my main concern is about the assumptions and practical utility of the tool.\", \"questions\": \"How are the empirical confidence bands estimated in Figures 1 and 3?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Separate from our rebuttal above, we are happy to respond to your questions:\\n\\n*The region of relevant hyperparameters converges about the optimum*: You disagree that the \\u201cregion of relevant hyperparameters\\u201d converges about the optimum because random search does not adapt. This objection comes from a misunderstanding of what was meant by relevant hyperparameters. In the sentence before the quote (line 180), we defined it as: \\u201cthe [hyperparameters] better than the best you have seen so far\\u201d. Thus, the region of relevant hyperparameters is not a property of the algorithm at all, but how far the search has progressed. Under general conditions, the set of hyperparameters better than the current best does indeed converge about the optimum. We have added a formal proof of this (Proposition E.1). Based on your feedback, we have revised the language to be more specific and included a formal proof of our limit theorem in Appendix E.\\n\\n*\\u201c... being close to the optimum, would require a very very large number of trials in a continuous search space of D dimensions\\u201d*: This statement is mostly true, except the speed of convergence depends not on the dimension of the space but the *effective* dimension of the loss surface. In fact, our theory shows this and quantifies exactly how this dimension affects random search. 
In all five practical scenarios we considered, this dimension was low: 1 or 2.\\n\\n*\\u201cThe work defines the asymptotic regime as the hyperparameters that we care about the most, those close to the optimum (Line 81).\\u201d*: Thank you for bringing this up, our language here was too imprecise. We define the asymptotic regime as the point where the asymptotics determine the behavior of random search. It is a bias-variance trade-off between how good the Taylor approximation is and how many data points you have to fit the distribution. We replaced this description with a more specific one (line 182).\\n\\n*\\u201c... did the authors order the random search trials by performance? \\u2026 The curve looks like a curve that is generated from training a model\\u201d*: As explained in Section 2.1: Formalizing Random Search, we estimate the *median* tuning curve. While individual runs of random search display flat regions, the median of all such runs will be smooth because the probability of having found a good configuration increases with each iteration. The curves are not from training a model, and their construction is described in detail in Section 3. The median tuning curve is a statistical quantity, so we use a large sample of 1,024 iterations to estimate it. We only visualize the first 70 to 100 iterations to better show the curves\\u2019 structure, since they are essentially flat past that point.\\n\\n*\\u201c... practitioners tend to use multi-fidelity based methods that are model-based\\u2026\\u201d*: As discussed in our general response, random search (along with grid search) remains one of the most common hyperparameter tuning methods in practice for deep learning research. 
In the Llama 3.1 report from this year with over 200 core contributors, these are the only two hyperparameter tuning methods mentioned.\\n\\n*\\u201cHow many data points (HPO trials) are needed\\u2026 to accurately reflect the tuning curve\\u201d*: The answer to this question is necessarily subjective since it depends on the desired level of accuracy, but to get a sense of it we show fits obtained using 48 data points in Figure 6.\"}", "{\"metareview\": \"In this paper, the authors establish a more formal understanding of random search as a method for tuning hyper-parameters of deep learning models, showing that it converges to a \\\"noisy quadratic\\\" distribution. There are well established methodologies for performing experimental design for hyper-parameter tuning when experiments are noisy. However, grid search and random search remain popular - presumably due to their simplicity and robustness to noise, non-stationarity, etc. Therefore, having a better understanding of random search seems useful. The reviewers found the new theoretical view novel, and the paper clear and insightful. However, the reviewers all voted to reject the paper (3, 3, 5, 5). Multiple reviewers seemed to question whether the proposed theory was relevant to practice - i.e. that the \\\"asymptotic regime\\\" was a reasonable assumption in practice. Also that if the assumptions held, i.e. if the underlying function was smooth and well-approximated by a quadratic, then other tools would be more appropriate than random search. Other reviewers also asked for comparison to other hyper parameter tuning methods. (Note, the authors argue that comparing different tools was not a claim or objective of the paper - rather than understanding random search). Reviewers also asked for theoretical justification of the noisy quadratic assumption, and more theoretical justification in general. The authors seem to have added this in the response, but in the appendix. 
Finally, although the reviewers seemed to believe that the experiments supported the theory, they had concerns about whether the results would generalize across architectures, and questioned whether the search spaces were unusually well suited to random search.\\n\\nWhile the paper seems insightful and relevant to how hyper-parameter tuning is done in practice, the reviews in sum seem to suggest that the paper is not quite ready for publication. The scores would place the paper well below the threshold for acceptance. Therefore, the recommendation is to reject. However, there seems to be a good start here. Given the reviews, it seems like it would be useful to establish, given the theory, when random search might be more appropriate than a model-based approach (high parallelism? lots of noise and non-stationarity?) and then provide some insight on best practices. Hopefully the reviews will be useful to strengthen the paper for a future submission.\", \"additional_comments_on_reviewer_discussion\": \"There was some discussion between the reviewers and authors. In particular, the authors seemed to feel that the reviewers were missing the point of the paper - i.e. to provide some theoretical insight into random search rather than make claims about its performance compared to other methods. In my view, if this was a consensus takeaway from the reviewers, then the narrative of the paper needs to be changed.\\n\\nThere were multiple questions about whether the smoothness assumption and quadratic assumption were valid. The authors provided some theory in the response which seemed to convince one reviewer to raise their score (after they dropped it after reading other reviews). However, the reviewers all kept their scores below the accept threshold.\\n\\nThere were also concerns about related work in that the state-of-the-art methods for hyper-parameter tuning weren't included in the discussion. 
The authors agreed to include these in the paper, but noted that they were not aiming to compare methods.\"}", "{\"comment\": \"Thank you for the clarification. Indeed, there was a misunderstanding on the distribution under investigation. P(Y>y) indeed converges. What is still surprising is how fast it converges in your experiments; this is highly counter-intuitive, as is the low dimensionality of the effective number of hyper-parameters.\\nThank you for including the additional experiments, indeed these make the empirical results much more compelling in my view.\\n\\nCan you explain how the median of all runs (i.e. ground truth in Figure 6) is computed from the results? Each ordering of the 1024 runs provides a different curve, correct? It's not immediately obvious how to compute the probability of improvement from this.\\n\\nSimilarly, for 4.3, it's a bit unclear to me how the model was fit to the subset of 48 iterations, since again each ordering of the iterations would give a different tuning curve. \\n\\nEven 48 iterations would be a substantial time to wait for an initial experiment, and for such a budget, a more advanced tuning strategy is likely beneficial. \\n\\nI'm changing my rating back to my original rating given the clarification. I'm not yet convinced of the practicality of the approach, but I'm happy to adjust my rating based on the author response.\"}", "{\"comment\": \"Thank you for observing that our first-principles approach leads to a \\u201cclean and empirically compelling model of random hyperparameter search\\u201d, as well as our \\u201clarge amount of statistical and empirical validations\\u201d that \\u201cdemonstrate the efficacy of this distribution for modeling random hyperparameter search\\u201d. 
To provide even more evidence, we have complemented the paper\\u2019s informal derivation with formal proofs in Appendix E.\\n\\nYou bring up a great point: while our theory provides a deeper understanding of random search, what is the practical impact? Random search is not the most efficient tuning algorithm; however, our goal is not to develop efficient tuning algorithms, but rather to provide better tools for the design and analysis of deep learning experiments.\\n\\nIn experiments, researchers need to iterate quickly so they often evaluate a single batch of hyperparameter configurations in parallel. If an idea is obviously better or worse than the baseline, then hyperparameters need not be fully tuned at this exploratory stage. As a result, grid search and random search are quite common. Our theory enables these practitioners to estimate how much more performance might increase if they kept tuning\\u2014for example, if they decided to use a better tuning algorithm. In Section 4.3, we demonstrate this application by extrapolating tuning curves. Based on our proofs (Appendix E) and empirical results (Section 4 and Appendix D), our theory provides a statistically rigorous foundation for such analyses.\\n\\nIn addition, though random search is not state-of-the-art, it remains a common hyperparameter tuning algorithm. Our asymptotic theory identifies the key determinants of random search\\u2019s performance, and predicts how it will progress based on them. The most important one is the effective number of hyperparameters: $\\\\gamma$. When $\\\\gamma > 3$, random search will not be very effective. In this way, our analysis provides a better understanding of random search and, in particular, its limitations. These limitations explain when and why advanced algorithms like Bayesian optimization outperform random search. 
While these algorithms are out of scope for our current work, it would be interesting for future work to extend the theory and analyze this more complicated case.\"}", "{\"comment\": \"Thank you very much! We greatly appreciate your additional consideration.\"}", "{\"comment\": \"Thank you for reviewing our updates! We\\u2019re glad you found the additional evidence compelling. Your questions highlight some important points, and we will make sure to discuss them in our paper.\\n\\n> What is still surprising is how fast [the asymptotic approximation] converges in your experiments; this is highly counter-intuitive, as is the low dimensionality of the effective number of hyper-parameters.\\n\\nThis counter-intuitiveness probably comes from how the use case changes the context. AutoML often considers complex search spaces with the goal of building the best model. In contrast, our work considers how best to analyze a common kind of deep learning experiment&mdash;the kind where a researcher compares a new model against a baseline. In this setting, the search space is more regular and the asymptotic approximation converges quickly; however, it\\u2019s possible (perhaps even expected) that convergence would be different in the AutoML setting. As this setting is not the focus of our investigation, it's left to future work. We'll make sure to mention this under the limitations.\\n\\n> Can you explain how the median of all runs (i.e. ground truth in Figure 6) is computed from the results? Each ordering of the 1024 runs provides a different curve, correct?\\n\\nCorrect, each ordering produces a different curve; thus, we follow prior work and estimate the *pointwise* median, separately at each x-value. The resulting curve has the following interpretation: if you evaluate x hyperparameter configurations, then you have a 50% chance of doing better than y.\\n\\nTo compute the ground truth, we use the method proposed in [1] for estimating median tuning curves. 
Intuitively, it:\n\n1. Resamples the search iterations with replacement to produce many replicates.\n2. Selects the validation score in the i\u2019th position from each replicate.\n3. Takes the median of these i\u2019th positions across all the replicates.\n\nIt turns out you can compute this without brute-force. A full description is provided in [1], but we\u2019ll summarize it. If $Y_i$ is the score from the i\u2019th iteration of random search, then we denote the best score after $k$ rounds by: $T_k = \\max_{i=1}^k Y_i$. We have the following fact about $T_k$:\n\n$$\\mathbb{P}(T_k\\leq y) = \\mathbb{P}\\left(\\max_{i=1}^k Y_i\\leq y\\right) = \\mathbb{P}(Y_1\\leq y\\land\\ldots\\land Y_k\\leq y)$$\n\nSince each round of random search is independent and identically distributed, this equals:\n\n$$\\mathbb{P}(T_k\\leq y)=\\prod_{i=1}^k\\mathbb{P}(Y_i\\leq y)=\\mathbb{P}(Y\\leq y)^k$$\n\nSo, the median of $T_k$ is the value such that $\\mathbb{P}(Y\\leq y)^k = 0.5$. Letting $F(y) = \\mathbb{P}(Y\\leq y)$ and solving for $y$ gives: $y = F^{-1}(0.5^{1/k})$. We then let $F$ be the distribution from resampling with replacement (the *empirical distribution*).\n\nNote that the search iterations\u2019 order doesn\u2019t actually matter. Since random search samples hyperparameters independently, the results are just a sample from some fixed distribution.\n\n> Similarly, for 4.3, it's a bit unclear to me how the model was fit to the subset of 48 iterations, since again each ordering of the iterations would give a different tuning curve.\n\nSince the iterations\u2019 order doesn\u2019t matter in random search, we sampled 48 iterations without replacement from the full 1,024. The ground truth is still estimated with all 1,024 iterations, but the theoretical and empirical estimates only use the subsampled 48. The empirical estimate uses the 48 iterations to estimate the empirical distribution as in [1].
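As a quick illustrative sketch (an editorial addition, not code from [1]; `scores` is a hypothetical list of validation scores from completed random-search iterations, higher being better), the closed-form median tuning curve $y_k = F^{-1}(0.5^{1/k})$, with $F$ taken to be the empirical CDF, can be computed as:

```python
import math

def median_tuning_curve(scores, max_k):
    # Pointwise median of the best score after k random-search iterations,
    # using the empirical CDF F of `scores`: y_k = F^{-1}(0.5 ** (1 / k)).
    s = sorted(scores)
    n = len(s)
    curve = []
    for k in range(1, max_k + 1):
        q = 0.5 ** (1.0 / k)
        # Empirical inverse CDF: the smallest order statistic whose CDF
        # value (i / n for the i-th smallest) reaches the level q.
        i = min(max(math.ceil(q * n), 1), n)
        curve.append(s[i - 1])
    return curve
```

For example, `median_tuning_curve([1, 2, 3, 4], 3)` returns `[2, 3, 4]`: as $k$ grows, the quantile level $0.5^{1/k}$ rises toward 1, so the curve climbs toward the best observed score.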
The theoretical estimate uses the 48 iterations to fit the noisy quadratic distribution. Both then compute the tuning curve via the formula $y = F^{-1}(0.5^{1/k})$ (see Section 3 of the paper for more details).\\n\\n> Even 48 iterations would be a substantial time to wait for an initial experiment, and for such a budget, a more advanced tuning strategy is likely beneficial.\\n\\nThe main advantage of random search is that it runs in parallel. With enough compute, 48 iterations of random search takes the same time as 1. In this setup, random search actually takes *less* time than any sequential method. Of course, more compute isn\\u2019t free; however, we can dramatically reduce the compute cost for *experiments* using scaling laws. For example, the GPT-4 Technical Report [2] successfully extrapolated performance from models trained with 1/1,000th the compute. This is why our analysis is so well-suited for experiments: when tuning a large model for production, compute efficiency is very important; however, when exploring ideas in experiments, time efficiency and reproducibility are more important.\\n\\n[1] Lourie, N., Cho, K., & He, H. (2024). Show Your Work with Confidence: Confidence Bands for Tuning Curves. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. (pp. 3455\\u20133472). ACL.\\n\\n[2] OpenAI. (2023). GPT-4 Technical Report. arXiv. https://arxiv.org/abs/2303.08774\"}", "{\"comment\": \"After reading the other reviews, in particular reviewer TdXJ pointing out that random search never narrows the search-space, and therefore the asymptotic regime is unlikely to be the relevant one at any stage of the optimization, I'm adjusting my rating to \\\"reject\\\".\"}", "{\"summary\": \"In this paper, the probability function relationship between random search and model performance is analyzed theoretically, and the corresponding parameterized distribution is designed. 
The effect of parameter estimation of this distribution is better than that of nonparametric estimation. With this distribution, you can have a good understanding of the impact of parameter adjustment under the task. It helps researchers to evaluate the self-designed method and modify the corresponding strategy without eliminating the influence of parameter adjustment.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"In this paper, a new parameter distribution is proposed, which theoretically fits the model performance changes under random search parameters. The fitted curves can help researchers to do the next step, such as judging whether their model can solve the task based on the best predictions.\\n\\nThe three parameters in the new parameter distribution correspond to the actual parameter meanings, and the influence of the parameters can be roughly understood directly through the estimated distribution.\", \"weaknesses\": \"The task model in the experiment is slightly thin and not comprehensive. For example, in line 120, the authors mentioned \\\"Which architecture\\\", but in experiments, no choice of ResNet has been considered. 
The authors use ResNet18 directly.\\n\\nThe new parameter distribution proposed by the author is compared only with the non-parametric distribution, not with other simple parameter distributions, whether the simple parameter model is sufficient to approximate the ground truth.\\n\\nThe author claims to propose an asymptotic theory of random search, but in fact the author relies only on analysis rather than proof to provide an approximation, without any theoretical guarantee like asymptotic convergence or probably approximately correct (PAC) learning theory.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for highlighting the \\u201cclear predictive capabilities\\u201d of our theory and its \\u201cnovel description of the distribution of outcomes\\u201d from random search. While you note our theory is \\u201cconfirmed on three deep learning models\\u201d, you also raise two important concerns: are the theory\\u2019s assumptions satisfied more generally? And is it practically useful?\\n\\nFor practical utility, the goal of our work is not to propose a state-of-the-art hyperparameter tuning algorithm. Clever and effective algorithms for this purpose already exist. Instead, we seek better tools for the design and analysis of deep learning experiments. Here, there are far fewer options. During the exploratory stage, researchers need to iterate quickly. As a result, they typically evaluate a single batch of hyperparameter configurations in parallel. Often, the hyperparameters are not fully tuned until a later stage. Our theory provides the statistical foundation necessary to analyze these results and determine if it is worth tuning more\\u2014perhaps even with a better tuning algorithm. 
Section 4.3 demonstrates this use of our theory by extrapolating the tuning curve.\\n\\nBesides new statistical methods, the theory also provides a better understanding of random search&mdash;and its limitations. As you mention, model-based optimization can have significant advantages. Our theory clarifies the contexts in which these advantages are most important. It has long been known that random search\\u2019s effectiveness relates to the number of *important* hyperparameters [1]. Our theory quantifies this relationship and formalizes it as *the effective number of hyperparameters*, $\\\\gamma$. This parameter determines the shape of the noisy quadratic and how fast random search will progress. When $\\\\gamma$ is too high, random search will be very slow. The surprising fact is that often $\\\\gamma$ is low&mdash;in our experiments, it was always 1 or 2.\\n\\nFinally, you asked if our theory\\u2019s assumptions are satisfied more generally. We have added formal proofs in Appendix E, but ultimately this is an empirical question. In our experiments, we demonstrated our theory matches the outcomes from random search after the first 1-2 iterations. Those experiments examine 3 different models from both language and vision. Still, you brought up a valuable point: could the asymptotic theory fit because the models we consider are well understood? There are two ways in which this might happen: 1.) the architectures are particularly robust to their hyperparameters, or 2.) we know good search spaces for them.\\n\\nTo address both concerns, we have added Appendix D: Generalization Across Architectures. In it, we use the *same* hyperparameter search distribution as for ResNet18, but apply it to AlexNet [2] and ConvNext [3]. AlexNet is an older architecture and thus much less developed than ResNet. In contrast, ConvNext is newer and more advanced. Together, these architectures span a decade of research. 
By using the same search space, we guarantee that it is not unusually well-suited to each. In this setting, our theory still matches the outcomes of random search from the first 2-4 iterations. As you may have expected, the least advanced architecture, AlexNet, needs more iterations for the asymptotic regime to become applicable; however, 4 iterations is still well within the bounds of practical relevance. More importantly, as newer architectures will be even more advanced, our theory only becomes *more* relevant over time.\\n\\nWith these additions, our empirical results cover 5 architectures including convnets and transformers, from both vision and language, involving pretraining and finetuning, and spanning a decade of architectural improvements. Since we use the same search space across 3 models, it can not be tailored to each. In all these experiments, our theory describes random search after just a handful of iterations. We would be excited to see future work build off this foundation to analyze more advanced algorithms, like Bayesian optimization; still, our theory accurately describes a hyperparameter tuning method which is extremely common in practice.\\n\\n[1]: J. Bergstra, Y. Bengio. (2012). Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research. 13(10):281\\u2212305.\\n\\n[2]: A. Krizhevsky, I. Sutskever, G. E. Hinton. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25.\\n\\n[3]: Z. Liu, H. Mao, C. -Y. Wu, C. Feichtenhofer, T. Darrell and S. Xie. (2022). A ConvNet for the 2020s. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11966-11976.\"}", "{\"summary\": \"This paper proposes a simplified but accurate statistical model of hyperparameter tuning under random search in the \\\"asymptotic\\\" regime. 
Here the term \\\"asymptotic\\\" refers to hyperparameter settings which are \\\"close\\\" to optimal, for which a second-order taylor expansion around the set of optimal hyperparameters is informative. Fore this regime the authors heuristically propose the \\\"quadratic\\\" distribution to model the performance of random search in expectation over training randomness. To model the noisy effect from training randomness the authors propose a homoskedastic additive gaussian noise process which results in the \\\"noisy quadratic\\\" distribution. Over a variety of tasks the authors demonstrate the efficacy of this distribution for modeling random hyperparameter search.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors provide a clean and empirically compelling model of random hyperparameter search using a heuristic, first-principles based approach. The paper is written clearly, with a large amount of statistical and empirical validations.\", \"weaknesses\": \"It is quite unclear what the implications of these observations are. Also the framework only applies to random search as opposed to other randomized search methods such as Bayesian optimization. Currently the results seems like a few nice observations, but not a substantially impactful contribution.\", \"questions\": \"Is it possible to fit a model of H and perform a type of PCA procedure to determine the effective hyperparameters? Such an insight might help reduce the effective search space for certain classes of problems / hyperparameters. Are there any implications for other non-uniform random search methods such as Bayesian optimization? Or when doing muTransfer?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Note\", \"comment\": \"My apologies, I posted with the idea that the discussion period ended. 
However, it appears this year there exists a new design where the authors have one more day to reply. I will read your reply in detail and I will incorporate it in my judgement.\"}", "{\"comment\": \"The title of our work lays out its core contribution: \\u201cAn Asymptotic Theory of Random Search\\u201d. To deliver on this promise, we prove *a novel limit theorem*: the best scores from random search converge to the noisy quadratic distribution. This *new family of probability distributions* does not exist in the literature&mdash;we introduce it, derive its formulas, prove its properties, and implement its computational details. To enable widespread use, we make it *publicly available in our library*. Besides derivations and theoretical proof, we empirically validate our framework, demonstrating that the asymptotic distribution matches the ground truth across *five architectures spanning both vision and language*. We not only show our theory\\u2019s predictions match experiments, but verify its assumptions as well. Specifically, we show the noise from random seeds is normal and homoskedastic.\", \"a_number_of_reviewers_expressed_a_shared_misunderstanding\": \"we do *not* propose random search as an alternative to state-of-the-art hyperparameter tuning methods. We do not study how to find the best hyperparameters at all. Rather, we aim to develop *better tools for the design and analysis of deep learning experiments*. As a secondary contribution, we also seek a better understanding of random search&mdash;both its strengths and its limitations.\\n\\nSeveral reviewers point out that random search is not state-of-the-art. On this point, we completely agree: it is not state-of-the-art; however, it *is* practically significant. Along with grid search, random search remains one of the most common methods for hyperparameter tuning in typical deep learning experiments. 
For example, the Llama 3.1 report [1]\\u2014a substantial effort with over 200 core contributors and extensive experiments\\u2014mentions only two hyperparameter tuning methods: grid search (Section 7.5.2) and random search (Section 4.3.2). Thus, a theory of random search has considerable implications for deep learning practice. At the same time, random search has notable limitations; indeed, our theory clarifies them. It identifies the main determinants of random search\\u2019s performance (e.g., the effective number of hyperparameters), and *quantifies* exactly how they affect its progress. Sequential model based optimization can overcome these limitations; it offers a powerful tool and a fascinating area of research, and we hope future work builds off of our findings to analyze this more complex case.\\n\\nA final concern raised by several reviewers was practical impact. While understanding random search has its own merits, our primary aim was to improve deep learning practice. We hoped the theory would provide statistical tools for use in deep learning experiments; indeed, it has. In Section 4.3, we demonstrate how to extrapolate confidence bands for model performance as a function of tuning effort. These bands can be used, for example, to compare optimizers where robustness to hyperparameters is essential. Other applications include estimating confidence intervals for the best hyperparameters\\u2019 performance, or determining the effective number of hyperparameters.\\n\\nWe provide both theoretical and empirical evidence for our claims. For ease of presentation, we keep theoretical discussion informal in the main text. To ensure mathematical rigor, we have added Appendix E, containing formal proofs. Besides proving the limit, we test our assumptions and demonstrate convergence empirically. 
Though the theory is asymptotic, in our experiments it characterizes the behavior of random search after 1-4 iterations.\\n\\nAll reviewers agree that our experiments support the theory, though some expressed concern that fast convergence might not hold for other architectures. This could happen if convergence is due to the architectures we considered being particularly easy to tune, or if our search spaces were unusually well-suited for them. In response, we have used *the same search space* from our ResNet18 experiments with AlexNet (an old, less advanced architecture) and ConvNext (a new, more advanced one). These architectures span a decade of deep learning advancements, and because we use the same search space across them, it can not be tailored to each. This setting reconfirms our results: the theory characterizes random search in the first 2-4 iterations. So, though surprising, the asymptotic regime really does describe random search after just a few iterations.\\n\\nAs a tool for better experiments, as a framework for understanding random search, and as a foundation for future analyses, our theory offers many benefits for those who deal with hyperparameters in their research.\\n\\n[1]: Llama Team, AI @ Meta. The Llama 3 Herd of Models. (2024). https://arxiv.org/abs/2407.21783\"}", "{\"comment\": \"You also brought up an interesting question: \\u201cIs it possible to fit a model of H and perform a type of PCA procedure to determine the effective hyperparameters?\\u201d. We had this same thought and explored this direction. One challenge is that you have to fit the Hessian, which has $O(d^2)$ parameters. For example, if there are 8 hyperparameters then the Hessian has 36 parameters. Combined with the noise due to random seeds and the need to only use points near the optimum, applying PCA to the Hessian can be difficult. 
Actually, this is one reason why our theory is useful: by taking a marginal approach, you can identify the asymptotic regime and the effective number of hyperparameters without having to fit the Hessian. Regardless of how many hyperparameters you consider, the noisy quadratic distribution has only 4 parameters to fit.\"}", "{\"comment\": \"Before finalizing your decision, please note the rebuttal period has not yet ended and allow us the opportunity to provide our response.\"}", "{\"summary\": \"The authors propose to parametrize the performance of random search (tuning curve) with a noisy quadratic distribution. The authors test the fit and extrapolation of the proposed work in three experimental settings with diverse deep learning models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"-\", \"weaknesses\": [\"No baselines are considered.\", \"Limited number of experiments conducted. To really validate the claims of the paper one must consider diverse search spaces and models.\", \"The code for the work is not provided.\", \"The related work section is outdated.\"], \"questions\": \"- **Line 180, \\\"Thus as the search continues, the region of relevant hyperparameters converges about the optimum\\\"**\\n\\n I do not agree with the above statement: random search, as the name suggests, samples hyperparameters randomly. It is not a model-based method that incorporates the results into its sampling strategy. So the region of relevant hyperparameters is the same search space (except maybe what was sampled before); it is not constrained in any manner.
Additionally, being close to the optimum would require a very large number of trials in a continuous search space of $D$ dimensions.\n\n- **The work defines the asymptotic regime as the hyperparameters that we care about the most, those close to the optimum (Line 81).** \n\n Looking at Figures 3 and 4, this perspective does not correspond to the explanation provided by the authors. For example, at the bottom of Figure 4 (the ResNet model), the region pointed out as the asymptotic regime, from my perspective, would be somewhere around iteration 8-10. Random search there seems to be close to finding an optimum solution, while the asymptotic regime pointed out by the authors is around iteration 1-2.\n\n- At the bottom of Figure 3 and Figure 4, did the authors order the random search trials by performance? Because the performance over the iterations seems to follow a power law. Given that it is random search, I would expect some flat regions given by hyperparameter configurations that are not optimal. Based on the figures it seems that the performance is constantly improving, which is very surprising. The curve looks like a curve that is generated from training a model.\n\n- Throughout the manuscript, the authors mention that they use 1024 iterations for each considered model/search space combination; however, on the plots the number of iterations is up to 100 for Figure 3 and up to 70 for Figure 4. Do the authors consider the number of repetitions too? How exactly is the number 1024 devised? How exactly are the 48 subsamples collected: part of the beginning of the \\\"tuning curve\\\" or randomly from the full data?\n\n- **Line 502, \\\"however, random search remains a strong baseline, with variants near state-of-the-art (Li et al.,2018;2020).\\\"**\n\n The authors do not accurately reflect the current state of the domain. HyperBand and ASHA are not the state-of-the-art in multi-fidelity hyperparameter optimization.
There have been several advancements that combined the schedule of HyperBand with model-based surrogates[1][2] and more recently, the current state-of-the-art [3][4] approaches that use an adaptive schedule with model-based algorithms. I would urge the authors to incorporate the provided citations into the manuscript to provide accurate information about the current state-of-the-art in multi-fidelity optimization.\\n\\n- While the authors advocate the use of their proposed work with deep learning, the deep learning tasks are expensive and achieving $x$ trials is computationally demanding. In these scenarios, practitioners tend to use multi-fidelity based methods that are model-based. Random search is not a very promising algorithm.\\n\\n- How many data points (HPO trials) are needed for the provided distribution to accurately reflect the tuning curve?\\n\\n[1] Falkner, S., Klein, A., & Hutter, F. (2018, July). BOHB: Robust and efficient hyperparameter optimization at scale. In International conference on machine learning (pp. 1437-1446). PMLR.\\n\\n[2] Awad, N., Mallik, N., & Hutter, F. DEHB: Evolutionary Hyberband for Scalable, Robust and Efficient Hyperparameter Optimization.\\n\\n[3] Wistuba, M., Kadra, A., & Grabocka, J. (2022). Supervising the multi-fidelity race of hyperparameter configurations. Advances in Neural Information Processing Systems, 35, 13470-13484.\\n\\n[4] Kadra, A., Janowski, M., Wistuba, M., & Grabocka, J. (2024). Scaling laws for hyperparameter optimization. Advances in Neural Information Processing Systems, 36.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for observing the practical utility of our tool and how it can help researchers in deciding \\u201cthe next step, such as judging whether their model can solve the task\\u201d. 
As you remark, our theory enables researchers to estimate meaningful parameters that tell them about their problem, such as the best possible performance. It is exactly this use case&mdash;better tools for deep learning research&mdash;that is the goal of our work.\\n\\nYou also raise several concerns, which we have addressed in the updated version of the paper.\\n\\nFirst, you mention we could broaden our experiments, in particular we could explore architectural variations. In fact, we do explore an architectural variant of ResNet in our original experiments: whether to use blurpool or maxpool layers (see the search distribution described in Appendix C). Still, we could explore major architectural variations as well as minor ones. Thus, we have included new results that analyze AlexNet [1] and ConvNext [2] using the same hyperparameter search distribution and task as for ResNet. Appendix D presents these results. Notably, our theory describes the outcomes from each of these architectures well, with the asymptotic approximation matching the data after 2-4 iterations of random search.\\n\\nFor comparing to simpler parametric distributions, we prefer the distribution derived from our limit theorem for two reasons. The limit theorem gives us confidence that the distributional form is correct and thus should extrapolate. More importantly and as you point out, our distribution\\u2019s parameters have meaningful interpretations in terms of the original problem; therefore, their estimates are more informative to the researcher.\\n\\nFinally, you mention that while we propose a theory, our derivation was informal. On your suggestion, we have updated the paper with formal theorems and proofs (Appendix E). We kept the derivation in the main text informal for ease of presentation; however, the proofs in the appendix provide all the technical details and specify exactly the sense in which the score distribution converges to the noisy quadratic.\\n\\n[1]: A. Krizhevsky, I. 
Sutskever, G. E. Hinton. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25.\\n\\n[2]: Z. Liu, H. Mao, C. -Y. Wu, C. Feichtenhofer, T. Darrell and S. Xie. (2022). A ConvNet for the 2020s. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11966-11976.\"}", "{\"comment\": \"As discussed in our response to Reviewer TdXJ, their comment about lack of convergence is based on a misunderstanding of what was meant by \\u201cthe region of relevant hyperparameters\\u201d. The sentence before the line they quote clarifies this: the region of relevant hyperparameters consists of \\u201cthe ones better than the best you have seen so far\\u201d. Thus, it is not a function of the tuning algorithm, but rather how far tuning has progressed. This region converges to the optimum under general conditions: we have added a proof of this (Proposition E.1) and a formal proof of our limit theorem in Appendix E.\\n\\nEarlier, you also asked a related question: why are the tuning curves so smooth? As you point out, random search progresses in jumps, therefore a single run will be noisy with large flat regions. Instead of a single run, we estimate the *median* of all such runs over the search distribution. That is why the tuning curves in Figure 1 and 3 are smooth&mdash;the *probability* of finding a good configuration increases with each iteration.\"}", "{\"title\": \"Rebuttal Reply\", \"comment\": \"I have read all the other reviews and noticed that similar concerns to mine have been shared. Based on which I will keep my score and recommend for rejection.\"}" ] }
BzVJOqwBka
Prompt-Guided Distillation from Multimodal Large Language Models to Task-specific Models for Multimodal Sentiment Analysis
[ "Haoyu Zhang", "Xiaoying Tang", "Wei Liu", "Jian Luan", "Tianshu Yu" ]
Multimodal Sentiment Analysis (MSA) has made some progress with the advent of Multimodal Large Language Models (MLLMs). However, the scalability and the closed-source nature of some MLLMs impose challenges for efficient application in the real world. In this study, we explore an innovative pathway to infuse the capabilities of general MLLMs into task-specific small models for MSA. We introduce the Prompt-Guided Multimodal Framework (PGMF), a refined teacher-student framework designed to transfer knowledge from powerful, general MLLMs to smaller, efficient models. The PGMF-Teacher utilizes MLLM-generated prompts and a tailored conditional alignment module to achieve better MSA, while the PGMF-Student distills this expertise to predict independently of MLLMs' guidance. Extensive evaluations on two popular MSA datasets including SIMS and MOSI demonstrate that compared to previous task-specific small models, PGMF-Teacher achieves state-of-the-art performance with the help of MLLMs' prompts, while PGMF-Student achieves competitive results with fewer parameters and without relying on MLLMs' prompts. The proposed framework offers a novel way to equip task-specific small models with the capability of MLLMs.
[ "Multimodal Sentiment Analysis", "Representation Learning", "Multimodal Large Language Model", "Knowledge Distillation" ]
Reject
https://openreview.net/pdf?id=BzVJOqwBka
https://openreview.net/forum?id=BzVJOqwBka
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yek7Ymdhtg", "vp6u3LuSRA", "shgAczdjrs", "q1tKEaUoYi", "o4BnI4gDD0", "jrEqzyAyBL", "ivUJxMw80r", "Yy0n5RnCFP", "WT6INt3S4j", "VhdpKgCggI", "UYX4PnwcbL", "TLVdonYwCC", "O7AytXhzHH", "NL9j6w2uST", "Kea4cDbT6Y", "ICraSAsomV", "BVd6zxbt10", "582faIkyXL", "2olSFh09NO" ], "note_type": [ "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730966579704, 1731509274188, 1732536447317, 1737523393105, 1731509506993, 1730700342955, 1732003679361, 1731508880730, 1735023179078, 1732715289398, 1730823044136, 1731511204082, 1731510926309, 1731510660204, 1732716330379, 1732715108195, 1732541214383, 1732376864256, 1732717114713 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission379/Reviewer_9bUK" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Reviewer_9bUK" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Reviewer_wMDu" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Area_Chair_jw5b" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Reviewer_PNKv" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Reviewer_PNKv" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ "ICLR.cc/2025/Conference/Submission379/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission379/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This study proposes a Prompt-Guided Multimodal Framework (PGMF) to transfer the capabilities of large Multimodal Large Language Models (MLLMs) to smaller, task-specific models for Multimodal Sentiment Analysis (MSA). PGMF consists of a teacher model (PGMF-Teacher) and a student model (PGMF-Student). The teacher uses MLLM-generated prompts to achieve better alignment and sentiment analysis, while the student learns to predict independently. Experiments show that PGMF-Teacher achieves state-of-the-art performance, while PGMF-Student achieves competitive results with fewer parameters, providing an efficient way to enhance small models with MLLM capabilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Leveraging Multimodal Large Language Models (MLLMs) to address the current challenges in the field of multimodal sentiment analysis represents a promising and worthwhile direction for exploration.\\n2. Embedding the teacher-student model paradigm within this domain is also a well-considered and potentially impactful approach.\", \"weaknesses\": \"1. It appears that your CONDITIONAL ALIGNMENT primarily facilitates attention-based interaction between the GPT-generated prompts and the corresponding content. However, I am unclear about the specific significance of taking the dot product of these two attention maps. From my perspective, your alignment module seems to merely apply attention mechanisms followed by a dot product, which does not appear to introduce any substantive algorithmic novelty. Could you elaborate further on the theoretical or empirical contributions this approach provides beyond the existing methods?\\n\\n2. Your multimodal fusion module appears to simply concatenate features from different modalities and feed them into a transformer encoder. 
This approach is quite common and widely adopted in existing literature.\\n\\n3. In a word, it seems that the paper primarily applies the teacher-student model paradigm to the domain of multimodal sentiment analysis (MSA), incorporating GPT-generated content as prompts. While the motivation is sound, the implementation appears somewhat simplistic, lacking sufficient innovation to substantiate a significant contribution.\\n\\n4. The selection of baselines in your comparison is quite limited, and notably, none of the baselines are from 2024. Given that this field remains highly active and rapidly evolving, I strongly recommend including more recent baselines from 2024 to provide a more comprehensive and current evaluation of your proposed approach.\\n\\n5. The analysis presented in the \\\"EFFECT OF EACH COMPONENT\\\" section appears rather superficial and lacks depth, raising the concern that it may have been generated by AI without sufficient refinement or critical examination.\\n\\n6. Since the goal is to train a student model with reduced complexity, it would be highly informative to include a comparison of parameter counts with other baselines. Such a comparison would help substantiate claims regarding the efficiency and compactness of the student model relative to existing approaches.\", \"questions\": \"1. It appears that your CONDITIONAL ALIGNMENT primarily facilitates attention-based interaction between the GPT-generated prompts and the corresponding content. However, I am unclear about the specific significance of taking the dot product of these two attention maps. From my perspective, your alignment module seems to merely apply attention mechanisms followed by a dot product, which does not appear to introduce any substantive algorithmic novelty. Could you elaborate further on the theoretical or empirical contributions this approach provides beyond the existing methods?\\n\\n2. 
Your multimodal fusion module appears to simply concatenate features from different modalities and feed them into a transformer encoder. This approach is quite common and widely adopted in existing literature.\\n\\n3. In a word, it seems that the paper primarily applies the teacher-student model paradigm to the domain of multimodal sentiment analysis (MSA), incorporating GPT-generated content as prompts. While the motivation is sound, the implementation appears somewhat simplistic, lacking sufficient innovation to substantiate a significant contribution.\\n\\n4. The selection of baselines in your comparison is quite limited, and notably, none of the baselines are from 2024. Given that this field remains highly active and rapidly evolving, I strongly recommend including more recent baselines from 2024 to provide a more comprehensive and current evaluation of your proposed approach.\\n\\n5. The analysis presented in the \\\"EFFECT OF EACH COMPONENT\\\" section appears rather superficial and lacks depth, raising the concern that it may have been generated by AI without sufficient refinement or critical examination.\\n\\n6. Since the goal is to train a student model with reduced complexity, it would be highly informative to include a comparison of parameter counts with other baselines. Such a comparison would help substantiate claims regarding the efficiency and compactness of the student model relative to existing approaches.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 9bUK\", \"comment\": \"## Response to Q1/W1\\n\\n**As shown in Figure 2 of the paper**, we first use the MLLMs' prompt as a query to extract essential information from other modalities, obtaining a shifted attention map \\u25b3. 
This shifted attention map \\u25b3 is then applied to the original attention map by the **dot product (it can be seen as an attention map fusion/transfer)**, effectively adjusting and optimizing the alignment process with the help of MLLMs. To verify the effectiveness of our idea, we chose the straightforward dot product to design the conditional alignment module. By employing a simple pipeline, we focused on demonstrating that the MLLMs, even with basic structures/operations, could provide guidance that helps improve the task-specific small-scale model\\u2019s representation learning and overall performance. Specifically, the responses to contributions are as follows:\\n\\n1. **As mentioned in General Response, the dot product within the conditional alignment is not the innovation point of the PGMF. Our focus is to validate the effectiveness of the framework through the straightforward design.** Instead, the core contribution lies in providing a novel way for MLLMs to directly participate in regulating the alignment process, helping the task-specific small-scale model focus on more relevant cross-modal relationships. This innovation combines MLLMs' prompts with cross-modal alignment, enabling more efficient completion of alignment tasks and improving performance in multimodal sentiment analysis. Although some existing methods [1,2] use MLLMs/LLMs to assist task-specific small-scale model training, they typically rely on MLLMs/LLMs to generate high-quality data. In contrast, our approach differs fundamentally in principle.\\n2. **As shown in Figure 3 of the paper**, we demonstrate the effect of the conditional alignment module by showing the difference between vision-language attention maps with and without the MLLMs' help. The difference map clearly indicates that the MLLMs can help the model focus more precisely on key regions in the language and visual modalities, demonstrating the effectiveness of this core idea. 
Additionally, experiments on the SIMS and MOSI datasets (**Table 1 and Table 2 of the paper, and Table 7 of the appendix**) show that the prompt-guided alignment module enables both the PGMF-Teacher and PGMF-Student to achieve state-of-the-art performance across most metrics. These results further confirm the empirical contribution of our framework.\\n\\n## Response to Q2/W2 \\n\\n**As mentioned in General Response**, our primary focus is not on the fusion method itself, but rather on how MLLMs are leveraged to help the learning of task-specific small-scale models. Therefore, **our innovation lies in the whole pipeline of the proposed framework PGMF.** Accordingly, we opted for a simple way in the fusion module, using concatenation and a Transformer encoder, to emphasize the core impact of MLLMs in improving the task-specific small-scale model\\u2019s performance. **By achieving performance improvements without complex module design, we believe the model's improvements in MSA are mainly derived from the framework.** We hope this clarification helps to convey the main contribution of our work.\\n\\n## Response to Q3/W3\\n\\n**As mentioned in General Response**, **our core idea centers on the whole pipeline** of using MLLMs to help the learning of task-specific small-scale models. **We intentionally chose the simplest method to validate our idea, which we believe can better demonstrate the effectiveness of our method.** By keeping the implementation straightforward, we can clearly show that the performance gains are due to the prompt-guided alignment **rather than any additional complex module designs**. \\n\\nIn addition, Reviewer wMDu confirmed that our method has not been seen in other papers and is original. Reviewer PNKv recognized that our method makes a solid contribution and could be utilized for more tasks by the research community. These comments also demonstrate the novelty of our PGMF. 
**In a word, we believe that simple and effective methods are also valuable; simplicity is not an indication of a lack of innovation. There are many works [3,4,5] that have simple architectures but make a great contribution to the community.**\"}", "{\"comment\": \"Thank you for your detailed response and the additional experiments provided. I have carefully reviewed your rebuttal, and I appreciate the effort and thoughtfulness that went into addressing the initial comments.\\n\\nYour approach to leveraging LLMs to train a lightweight model for multimodal sentiment analysis is indeed meaningful and has potential. However, considering the current version of the work, I believe it aligns more with a score of 5 at best. ICLR places significant emphasis on algorithmic innovation, and I feel that aspect could be further strengthened in your submission.\\n\\nThank you again for your efforts and engagement.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer 9bUK\", \"comment\": \"## Response to Q4/W4\\n\\n**In our initial submission, the most recent baseline with the best results was ALMT.** It was published in December 2023 at EMNLP, which was **one of the best results with open-source code available** at the time. However, with recent releases from conferences like EMNLP 2024, we now have additional methods for comparison. Specifically, we have included the latest baselines for comparison, including KuDA [6] and FISFN [7]. **As shown in the table below**, we can see that PGMF-Teacher/Student achieves SOTA performance in all metrics on the SIMS dataset, demonstrating the effectiveness of our idea and framework. On the MOSI dataset, PGMF-Teacher/Student can also achieve good performance, especially in Acc-2 and F1. **In addition, it should be noted that these recent methods are not open-sourced. 
So we are unable to conduct multiple runs to report the mean and standard deviation for a comprehensive comparison.** \\n\\n| SIMS | | | | |\\n| ------------ | ------------------------- | ------------------------- | -------------- | -------------- |\\n| Method | Acc-2 | F1 | MAE | Corr |\\n| KuDA | 80.74 | 80.71 | 0.408 | 0.613 |\\n| FISFN | 80.50 | 80.7 | 0.397 | 0.619 |\\n| PGMF-Teacher | **83.06\\u00b10.95** | **84.06\\u00b10.43** | **0.370\\u00b10.50** | **0.690\\u00b10.80** |\\n| PGMF-Student | 81.40\\u00b11.58 | 81.85\\u00b11.41 | 0.382\\u00b11.39 | 0.662\\u00b11.26 |\\n| **MOSI** | | | | |\\n| KuDA | 84.40/86.43 | 84.48/86.46 | **0.705** | 0.795 |\\n| FISFN | 85.0/86.0 | 85.0/86.0 | 0.707 | **0.801** |\\n| PGMF-Teacher | **85.05\\u00b10.66/86.61\\u00b10.69** | **85.15\\u00b10.66/86.69\\u00b10.69** | 0.734\\u00b11.46 | 0.797\\u00b10.60 |\\n| PGMF-Student | 83.62\\u00b10.91/85.37\\u00b11.00 | 83.68\\u00b10.96/85.50\\u00b10.96 | 0.746\\u00b11.63 | 0.775\\u00b11.10 |\\n\\n## Response to Q5/W5\\n\\n**Thank you very much for your feedback on the \\\"EFFECT OF EACH COMPONENT\\\" section!** We would like to clarify that this paragraph was not generated by AI. In our submission, we condensed this section to meet length requirements, which may have inadvertently impacted the depth of the analysis. To address your concern, we have added detailed discussion in this section to provide a more critical analysis. In addition, we will reduce less essential content, such as removing single-run results and retaining only the multi-run averages and standard deviations for comparison, as suggested by Reviewer PNKv. The revised paragraph is as follows:\\n\\n> To evaluate the impact of each component within the framework, we conducted experiments by removing specific components. First, when we removed the MLLMs' prompt from the PGMF-Teacher, we observed a significant drop in performance across both datasets. 
Specifically, on the SIMS dataset, the F1 score decreased from 84.06% to 80.84%, and MAE increased from 0.370 to 0.436. A similar trend was observed on the MOSI dataset, where the F1 score dropped from 85.15% to 79.60%, and MAE increased from 0.734 to 0.914. These phenomena show that the MLLMs play a crucial role in helping the model capture relevant multimodal information more effectively. Second, we removed the guidance of the PGMF-Teacher during the training of the PGMF-Student. This led to a noticeable decrease in the student model's performance, with the F1 score on SIMS dropping from 81.85% to 78.72%, and on MOSI from 83.68% to 83.00%. The increase in MAE values on both datasets also reflects the PGMF-Student model's reduced ability to align multimodal information without teacher guidance. This result shows the importance of knowledge distillation, as the PGMF-Teacher's guidance can help the PGMF-Student learn the relationship between each modality effectively. \\n>\\n> In addition, we also observed that the guidance from the PGMF-Teacher had a greater impact on the student model\\u2019s performance on the SIMS dataset compared to the MOSI dataset. We believe that this difference may be due to the diversity of data in the SIMS dataset. Specifically, the SIMS dataset contains complex environments and disturbances such as lighting, head pose, and audio background noise. This makes it difficult for the PGMF-Student to achieve strong performance without relying on the guidance of the PGMF-Teacher.\"}", "{\"summary\": \"The paper proposes a novel framework that integrates the generalized knowledge of MLLMs to guide smaller, task-specific models for better MSA.\\n\\nSpecifically, visual and audio features are aligned with language features via two alignment modules: Visual-to-Language Alignment and Audio-to-Language Alignment. 
These conditional alignment layers establish correspondences between modalities with the help of the prompt, facilitating effective multimodal fusion with the help of MLLMs. \\n\\nBoth PGMF-Teacher and PGMF-Student can achieve good performance on two popular datasets (i.e., SIMS and MOSI), especially PGMF-Student, which can achieve improved performance without relying on prompts from MLLMs while maintaining fewer parameters. \\n\\nThis approach also offers a novel way to empower task-specific small models with the capabilities of MLLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"originality: The specific method of this paper has not been seen in other papers, so I believe it is original;\", \"weaknesses\": \"It is currently a common practice to distill knowledge from large models to small models and achieve improvement. This paper uses a multimodal large model to identify the clues that play a decisive role in predicting emotional labels in each modality, and then integrates them into the small model training framework for improvement. The idea is relatively straightforward, and the innovation is not particularly prominent.\", \"questions\": \"none\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"no concern\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Updated Response and Rebuttal Revision for Manuscript\", \"comment\": \"Hi, Reviewer PNKv. We have updated the responses, **including results on the MOSEI dataset and some revisions to the paper.** The details can be found in the PDF of the rebuttal revision. 
**If you have any further questions, please do not hesitate to discuss them with us.** Thanks for your suggestion.\"}", "{\"title\": \"General Response\", \"comment\": \"# General Response\\n\\nWe appreciate the valuable time and effort from all reviewers, as well as the constructive comments and suggestions that contributed significantly to the improvement of our paper. **We are eager to engage in further discussions with you to address your concerns.**\\n\\n**First, we would like to restate our motivation:**\\n\\nWe found that there are two limitations in applying MLLMs to MSA. 1) Although MLLMs have shown some improvement in MSA, their performance gains are often marginal and come at a high computational cost. 2) **Unlike the common practice in other fields** [1,2], using MLLMs to generate high-quality data for training small models is challenging for MSA due to the complexity of generating video, audio, and text data together. **These limitations motivate us to explore a different and more efficient direction: leveraging the guidance of MLLMs to help in training task-specific small-scale models for better MSA.** By involving MLLMs' prompts as guidance during the alignment and distillation process, we designed PGMF so that the task-specific small-scale model can benefit from the MLLMs' knowledge. This presents a novel and efficient framework.\\n\\n**Second, we would like to restate that the novelty lies in the framework rather than the model's architecture:**\\n\\n1. At the framework level, our pipeline **for the first time** utilizes prompt outputs from MLLMs as conditional guidance within a teacher-student framework, effectively improving the alignment and learning process of the student model. The experimental results validate the effectiveness of this framework, demonstrating that it enables the student model to achieve state-of-the-art performance with reduced complexity. \\n2. 
**We intentionally chose simple and straightforward architecture modules to validate our framework**, which we believe can better demonstrate the effectiveness of our framework. By keeping the implementation straightforward, we can clearly show that the performance gains are due to the prompt-guided alignment **rather than any additional complex module designs**. \\n3. **We believe that simplicity and effectiveness are valuable for the community.** There are many works [3,4,5] that have simple architectures but make a great contribution to the community. For example, in the MSA field, Self-MM [5] achieves significant performance despite using only MLPs in its network architecture. Similarly, our framework PGMF, although intentionally designed to be simple, offers a new perspective on using MLLMs to guide smaller models in multimodal tasks, which we believe can inspire further exploration and refinement in this area.\\n\\nThank you for your patience and suggestions. **We look forward to discussing our work with you.**\\n\\nSincerely,\\n\\nThe Authors\\n\\n\\n# References Used throughout the Rebuttal\\n\\n[1] Chen, L. et al., 2023. ShareGPT4V: Improving Large Multi-modal Models with Better Captions. *arXiv preprint* arXiv:2311.12793.\\n\\n[2] Chen, L. et al., 2024. ShareGPT4Video: Improving Video Understanding and Generation with Better Captions. *arXiv preprint* arXiv:2406.04325.\\n\\n[3] Zadeh, A. et al., 2017. Tensor Fusion Network for Multimodal Sentiment Analysis. In EMNLP 2017.\\n\\n[4] Tsai, Y.-H. H. et al., 2019. Multimodal Transformer for Unaligned Multimodal Language Sequences. In ACL 2019.\\n\\n[5] Yu, W. et al., 2021. Learning Modality-specific Representations with Self-supervised Multi-task Learning for Multimodal Sentiment Analysis. In AAAI 2021.\\n\\n[6] Feng, X. et al., 2024. Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis. In EMNLP 2024.\\n\\n[7] Jin, Y., 2024. 
GSIFN: A Graph-Structured and Interlaced-Masked Multimodal Transformer Based Fusion Network for Multimodal Sentiment Analysis. *arXiv preprint* arXiv:2408.14809.\"}", "{\"metareview\": \"The paper proposes a distillation approach for Multimodal Sentiment Analysis, where knowledge is extracted from Multimodal LLMs and used to bias the attention maps of a smaller task-specific model.\\n\\nTwo reviewers voted for borderline acceptance and one reviewer gave a borderline rejection while pointing out some limitations such as limited originality and lack of innovation.\\n\\nEven if this paper is on the borderline, considering the ICLR quality bar, the contributions seem not to be sufficient for ICLR presentation.\\n\\nSo, the AC recommends rejecting this paper.\", \"additional_comments_on_reviewer_discussion\": \"Initial scores were 6, 5, and 3.\\n\\nMain concerns raised by two reviewers were a lack of experimental results, paper organization, unclear method description and analysis, and a lack of innovation in methodology. \\n\\nDuring the rebuttal, the scores became 6, 6, and 5. \\nVia AC-reviewer discussion, 9bUK argued that the contribution of this paper is not sufficient for ICLR quality with a score of 5.\"}", "{\"title\": \"Follow-Up on Rebuttal of Reviewer wMDu\", \"comment\": \"Dear Reviewer wMDu,\\n\\nWe hope you are doing well. Thank you very much for your positive comments on our paper! We are writing to follow up on the response to our submission. \\n\\nWe have submitted a revised version and responded to the reviewers' comments. 
**We would appreciate it if you would kindly read our response and consider re-evaluating our paper.**\\n\\nIf there are any remaining questions or if further clarification would be helpful, please do not hesitate to let us know.\\n\\n**We are looking forward to your response.**\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"The paper presents a distillation framework for Multimodal Sentiment Analysis, where knowledge is extracted from Multimodal LLMs and used to bias the attention maps of a smaller task-specific model. This model is then used as a teacher for a task-specific student model, which does not require the LLM-generated prompts during inference. The method achieves competitive performance on CMU-MOSI and state-of-the-art performance on CH-SIMS.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The presented method of prompt-guided attention transfer from MLLMs is a solid contribution and could be utilized for more tasks by the research community.\", \"Ablations regarding the effect of prompt-guidance, the effect of loss weight hyperparameters, and extension of the method to different architectures are included (though some are delegated to the appendix).\", \"Multiple-run averages, along with standard deviations are included.\"], \"weaknesses\": [\"The evaluation could include the CMU-MOSEI dataset, which is larger and more recent than CMU-MOSI.\", \"The \\\"Searched best seed\\\" related rows in Tables 1 and 2 should be removed. Optimized seed results are rather uninformative, since the seed is not a tunable hyperparameter.\", \"I do not support the choice of delegating the \\\"Related work\\\" section to the Appendix, since relation to prior work should be a key component of a research paper. 
Reducing the size of figures 1, 2, removing the optimized seed results, and performing a general revision of the paper should create space in the main part of the paper.\", \"In general Sections 2.4, 2.5 are a bit verbose and the equations are rather uninformative. Equation 11, especially so. What exactly is the fusion operation? I think these sections could benefit from a revision both in terms of concreteness and clarity and in terms of length.\", \"The attention scores in Figure 3 range from -0.0004 to 0.0006 (very close to 0 and a difference from high to low score in the 4th decimal point). I think this is very concerning for the soundness of the method. What does the model actually attend to? Could this be due to the choice of the Hadamard product operation for fusing the attention matrices, which makes the scores extremely sparse / close to zero?\", \"The authors have addressed most of these weaknesses in the rebuttal.\"], \"questions\": \"My questions are included in the \\\"Weaknesses\\\" section of the review\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer PNKv\", \"comment\": \"## **Response to W1**\\n**Thank you for suggesting the inclusion of the CMU-MOSEI dataset for evaluation.** Due to the **financial cost** of using the ChatGPT API to help in training on such a large dataset, we faced some **economic constraints** during the initial experiments, which made it challenging to conduct evaluations on MOSEI. In response to your feedback, we have conducted experiments on the MOSEI dataset. **As shown in the table below (Table 3 of the paper),** the results on the larger dataset (MOSEI) show that PGMF-Teacher/-Student achieves advanced performance on most of the metrics with few parameters. This demonstrates that PGMF has good generalization ability on datasets of different sizes. 
It is worth noting that Self-MM, with the fewest parameters, shows good performance on the MOSEI dataset. This also demonstrates the feasibility of achieving strong performance with fewer parameters through suitable strategies.\\n\\n| MOSEI|||||||\\n| ------------ | --------- | ------------------------------ | ------------------------- | ------------------------- | -------------- | -------------- |\\n| Method | Parameter | Transformer-based Architecture | Acc-2| F1| MAE | Corr|\\n| Video-LLaMA2 | 7B | \\u2714\\ufe0f | 83.29/84.50| 83.23/85.21 | 0.922| 0.406 |\\n| GPT-4o-mini | - | \\u2714\\ufe0f | **85.04/86.90** | **85.25/87.04**| **1.015**| **0.744**|\\n| TFN | 5.04M | | 83.00\\u00b10.45/82.90\\u00b10.43| 82.68\\u00b10.40/82.83\\u00b10.41| 0.566\\u00b10.31| 0.725\\u00b10.21|\\n| MISA | 1.14M | \\u2714\\ufe0f | 84.41\\u00b10.30/85.09\\u00b10.62| 84.16\\u00b10.30/85.02\\u00b10.59 | 0.553\\u00b10.46 | 0.759\\u00b10.25|\\n| Self-MM | 0.16M | | 84.15\\u00b10.50/84.90\\u00b10.49| 84.15\\u00b10.43/84.79\\u00b10.40 | **0.529\\u00b10.47** | 0.764\\u00b10.45|\\n| TETFN | 1.25M | \\u2714\\ufe0f | 84.18\\u00b10.62/85.42\\u00b10.43 | 84.06\\u00b10.63/85.31\\u00b10.55| 0.543\\u00b10.51| 0.769\\u00b10.27|\\n| ALMT | 3.21M | \\u2714\\ufe0f | 84.35\\u00b10.34/84.76\\u00b10.45| 84.10\\u00b10.32/84.25\\u00b10.59| 0.542\\u00b10.45| 0.768\\u00b10.17|\\n| PGMF-Teacher | 1.47M | \\u2714\\ufe0f | **85.08\\u00b10.36/86.62\\u00b10.75** | **85.55\\u00b10.24/86.71\\u00b10.71** | 0.539\\u00b11.06| **0.773\\u00b11.51** |\\n| PGMF-Student | 0.48M | \\u2714\\ufe0f | 83.96\\u00b10.38/84.67\\u00b10.27| 84.20\\u00b10.48/84.74\\u00b10.28| 0.548\\u00b10.41| 0.747\\u00b10.51|\\n\\n## **Response to W2, W3 and W4**\\nThanks for your suggestion. We have made some effort to move the related work (Section 2) to the main body and ensure clarity and conciseness of the method presentation. 
The main revisions are: **1)** Remove the \\\"Searched best seed\\\" related rows and only report the average results of five runs. **2)** Reduce the size of Figures 1, 2, and 3 to make room for the related work. **3)** Remove some equations in Sections 3.4 and 3.5 to ensure clarity and conciseness. **4)** Explain the fusion operation in detail in Section 3.5. **The revised content can be found in the PDF of the rebuttal revision.**\\n\\n## **Response to W5**\\n\\n1. Figure 3 is an **attention difference map (not an attention map)**, obtained by subtracting the attention map without MLLM guidance from that of the PGMF-Student. The values represent the difference in attention scores, with positive differences indicating areas to which the PGMF-Student pays more attention.\\n2. In the original attention map, each row\\u2019s attention scores sum to 1 after the softmax operation. However, due to the relatively long sequence length (e.g., 55 frames for video sequences in the SIMS dataset), the attention scores become more distributed across frames, resulting in sparsity and low attention scores. Despite the low individual values, we can see from the difference map that these subtle changes in scores effectively shift the model\\u2019s focus across large regions, showing the prompt\\u2019s impact on attention distribution.\\n3. In long-sequence video data, we think changes between adjacent frames are continuous and gradual, making large attention score changes less likely. However, small changes in attention scores across many regions are sufficient to shift the model\\u2019s focus and significantly impact the multimodal alignment.\\n4. Our pipeline is intentionally designed to be simple, without any complex mechanisms in the alignment and fusion modules. This choice ensures that the improvements in performance are due to the framework itself, rather than relying on intricate structures or tricks. 
We believe this simplicity also demonstrates the effectiveness of our proposed framework.\"}", "{\"title\": \"Response to Reviewer wMDu\", \"comment\": \"## **Response to Weakness**\\n\\nThank you very much for your feedback on our work! We do appreciate your recognition of the originality of our work. Indeed, we are the first to introduce this specific framework that leverages MLLMs to help the training of task-specific small-scale models for better MSA.\\n\\n**About the weakness, our choice of a simple and straightforward model structure was intentional.** By keeping the design clear and direct, we aimed to demonstrate the core effectiveness of our idea and framework itself. This can ensure that the improvements we observe are due to the framework **rather than any particular trick or complex structural design**. We believe that a simple yet effective method can also offer valuable contributions to the community, as has been shown by many solid works [3,4,5]. Our goal is to do solid work, which we hope can inspire further research and practical applications in MSA.\"}", "{\"title\": \"Response to Reviewer 9bUK\", \"comment\": \"## Response to Q6/W6\\n\\nThank you for highlighting the importance of comparing parameter counts with other baselines. **In Table 7 of the appendix**, we included a comparison with ALMT (the best-performing prior method). However, we have also realized that it is important to compare with more methods. Therefore, **as shown in the table below**, we have added parameter count comparisons with other relevant methods. **In addition, it is worth noting that for the latest methods like** **KuDA** **[6] and FISFN [7] mentioned above, we were unable to include parameter counts due to the lack of open-source code.** \\n\\nAs we can see from the table below, although PGMF-Student has the second smallest parameter size, behind only Self-MM (a simple, direct and effective method), it achieves better performance. 
This demonstrates the effectiveness of our proposed framework and shows that PGMF achieves a balance of performance and parameters. In addition, we have also achieved significant improvements compared to the Transformer-based methods with few parameters.\\n\\n| SIMS | | | | | | |\\n| ------------ | --------- | ------------------------------ | ------------------------- | ------------------------- | -------------- | -------------- |\\n| Method | Parameter | Transformer-based Architecture | Acc-2 | F1 | MAE | Corr |\\n| TFN | 35.63M | | 78.12\\u00b11.56 | 77.83\\u00b11.62 | 0.434\\u00b11.12 | 0.579\\u00b11.50 |\\n| MISA | 21.66M | \\u2714\\ufe0f | 77.72\\u00b11.10 | 76.54\\u00b11.67 | 0.451\\u00b11.83 | 0.570\\u00b11.95 |\\n| Self-MM | 0.38M | | 77.94\\u00b11.11 | 77.72\\u00b10.68 | 0.418\\u00b11.05 | 0.589\\u00b11.54 |\\n| TETFN | 1.53M | \\u2714\\ufe0f | 80.18\\u00b10.49 | 79.34\\u00b10.52 | 0.422\\u00b11.30 | 0.588\\u00b11.71 |\\n| ALMT | 2.60M | \\u2714\\ufe0f | 79.91\\u00b10.29 | 80.17\\u00b10.60 | 0.421\\u00b10.69 | 0.583\\u00b10.70 |\\n| PGMF-Teacher | 2.54M | \\u2714\\ufe0f | **83.06\\u00b10.95** | **84.06\\u00b10.43** | **0.370\\u00b10.50** | **0.690\\u00b10.80** |\\n| PGMF-Student | 0.82M | \\u2714\\ufe0f | 81.40\\u00b11.58 | 81.85\\u00b11.41 | 0.382\\u00b11.39 | 0.662\\u00b11.26 |\\n| **MOSI** | | | | | | |\\n| Method | Parameter | Transformer-based Architecture | Acc-2 | F1 | MAE | Corr |\\n| TFN | 9.50M | | 77.38\\u00b11.37/78.11\\u00b10.60 | 77.35\\u00b11.33/78.02\\u00b10.57 | 0.949\\u00b13.13 | 0.662\\u00b11.95 |\\n| MISA | 1.14M | \\u2714\\ufe0f | 80.93\\u00b10.99/81.05\\u00b10.83 | 80.90\\u00b11.03/81.01\\u00b10.87 | 0.773\\u00b11.81 | 0.775\\u00b10.63 |\\n| Self-MM | 0.16M | | 82.94\\u00b10.63/83.18\\u00b10.35 | 82.95\\u00b10.63/83.09\\u00b10.36 | 0.717\\u00b11.53 | 0.792\\u00b10.55 |\\n| TETFN | 1.25M | \\u2714\\ufe0f | 80.87\\u00b10.52/80.82\\u00b10.53 | 80.87\\u00b10.52/80.82\\u00b10.53 | 0.726\\u00b11.68 | 0.791\\u00b10.86 |\\n| ALMT | 2.50M | 
\\u2714\\ufe0f | 83.00\\u00b10.22/85.12\\u00b10.20 | 83.00\\u00b10.22/85.19\\u00b10.27 | **0.713\\u00b10.75** | 0.795\\u00b10.54 |\\n| PGMF-Teacher | 1.45M | \\u2714\\ufe0f | **85.05\\u00b10.66/86.61\\u00b10.69** | **85.15\\u00b10.66/86.69\\u00b10.69** | 0.734\\u00b11.46 | **0.797\\u00b10.60** |\\n| PGMF-Student | 0.53M | \\u2714\\ufe0f | 83.62\\u00b10.91/85.37\\u00b11.00 | 83.68\\u00b10.96/85.50\\u00b10.96 | 0.746\\u00b11.63 | 0.775\\u00b11.10 |\"}", "{\"title\": \"Reply to authors\", \"comment\": \"I thank the authors for addressing my comments. I am raising my score.\"}", "{\"title\": \"Follow-Up on Rebuttal of Reviewer PNKv\", \"comment\": \"Dear Reviewer PNKv,\\n\\nWe hope you are doing well. We are writing to follow up on the response to our submission. We appreciate your time and effort in reviewing our work. We know this is a busy time, but we would appreciate it if you could kindly take a look at our response. **We would appreciate it if you would consider re-evaluating our paper.**\\n\\nIf there are any remaining questions or if further clarification would be helpful, please do not hesitate to let us know.\\n\\n**We are looking forward to your response.**\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer 9bUK\", \"comment\": \"Thank you for taking the time to review our rebuttal and for your feedback.\\n\\nWe respect your perspective on algorithmic innovation being a core criterion for ICLR submissions. **However, we also believe that impactful contributions of a paper have diverse forms, including framework development, benchmarking, and empirical insights.** Our proposed PGMF is the first to integrate MLLMs with task-specific models, providing a novel and efficient way to achieve better MSA.\\n\\nFinally, we sincerely thank you for your time, feedback, and the opportunity to discuss our work.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Follow-up on Review Feedback\", \"comment\": \"Dear Reviewers,\\n\\nWe hope this message finds you well. 
We sincerely appreciate the time and effort you have dedicated to reviewing our work. As the rebuttal deadline approaches, **we would greatly appreciate it if you could kindly provide feedback on our responses to your initial comments**. Your insights would be invaluable in helping us refine and strengthen our work.\\n\\nThank you very much for your time.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer PNKv,\\n\\nThank you for adjusting your rating and supporting our work. We sincerely appreciate the opportunity to improve our submission and are grateful for the time and effort you have dedicated to reviewing it.\\n\\nSincerely,\\n\\nThe Authors\"}" ] }
Bz9wjvToCS
DiffDeID: a Multi-conditional Diffusion-based Method for High Fidelity Face De-indentification with Diversity
[ "Yanzhuo Wei", "Yu Pan" ]
Face de-identification is a critical task that aims to obscure true identities while preserving other facial attributes. Current methodologies typically involve disentangling identity features within a latent space and leveraging adversarial training to balance privacy with utility, often at the cost of a trade-off between the two. To surmount these limitations, we introduce DiffDeID, a novel approach grounded in diffusion models. This method incrementally safeguards identity and sustains utility, all while ensuring enhanced interpretability. Our method employs a Latent Diffusion-based ID Sampler to generate authentic identity embeddings that are obfuscated from the original identity, thereby providing users with diverse options. Additionally, a multi-condition diffusion model is utilized for facial images, ensuring the retention of image utility. We further introduce a novel training and inference paradigm, utilizing a unified architecture tailored for video facial de-identification tasks. The robustness of our method is attributed to its powerful 3D prior and meticulous generation design, enabling natural identity protection, generation of high-quality details, and robustness across various attributes. Through extensive experimentation, we demonstrate that DiffDeID surpasses previous methodologies.
[ "Face De-identification", "Data privacy", "Diffusion Model" ]
https://openreview.net/pdf?id=Bz9wjvToCS
https://openreview.net/forum?id=Bz9wjvToCS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tsFhf1xM8m", "UlhYKAHI4C", "TdTezfpf5i", "OTLxjDMfxp", "HhvJvNHR6x", "44FKABr87d" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730641609774, 1730345546771, 1729319152707, 1731412093493, 1732259426780, 1730701336306 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9230/Reviewer_CmV1" ], [ "ICLR.cc/2025/Conference/Submission9230/Reviewer_uDWy" ], [ "ICLR.cc/2025/Conference/Submission9230/Reviewer_5tX1" ], [ "ICLR.cc/2025/Conference/Submission9230/Reviewer_5Cyb" ], [ "ICLR.cc/2025/Conference/Submission9230/Authors" ], [ "ICLR.cc/2025/Conference/Submission9230/Reviewer_6A3g" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces a novel diffusion-based approach, called DiffDeID, suitable for de-identification of face images. The approach relies on four main components, including a facial mask predictor, a 3D Morphable Model (3DMM) for defining facial features, a Latent Diffusion-based ID sampler, used for constructing diverse de-identified identity embeddings, and lastly a multi-conditional diffusion model responsible for producing realistic face images. The de-identification pipeline first entails the extraction of 3D Morphable Model (3DMM) coefficients from an input face image, which include the identity, expression, texture, illumination and pose. In the next step, the identity coefficient is replaced with a de-identified coefficient produced by the Latent Diffusion-based ID sampler. The multi-conditional diffusion model then considers the coefficients and the masked face image to produce replacement face images with a de-identified subject. The proposed solution is trained using the CelebAMask-HQ dataset and its de-identification capabilities are evaluated with the FaceForensics++ dataset. 
Throughout the experiments the authors showcase that DiffDeID outperforms the state-of-the-art in terms of de-identification while preserving attributes unrelated to identity. The suitability for video de-identification is also explored, by evaluating the proposed solution on the VoxCeleb1 dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper is written in a clear and concise manner. Differently from existing approaches, DiffDeID utilizes a Latent Diffusion-based ID sampler for sampling new anonymized identity embeddings. When defining 3DMM coefficients the paper also take into account the gaze direction of subjects, which is lacking in the original implementation, despite being a rather crucial feature. With a new training and inference procedure the proposed architecture also achieves video face de-identification, preserving information not related to the identity. The performed experiments entail both qualitative and quantitative results, exploring the similarity of input and generated identities as well as the preservation of other face coefficients. The approach is compared to three existing solutions where it achieves better results across all measured metrics. Notably, the approach allows for drastically better video face de-identification, which represents a crucial challenge in the real-world application of de-identification methods.\", \"weaknesses\": \"Despite the listed strengths, the paper suffers from a few major weaknesses, which should be addressed to improve the overall quality of the paper.\\n\\n1.\\tThe methodology section lacks suitable references to existing works. These are crucial for identifying aspects of the work that are novel and for giving credit to original authors. The experiments section also lacks references to the state-of-the-art that DiffDeID is compared to. 
Existing approaches and their potential weaknesses should also be explored in more detail in the paper.\\n\\n2.\\tFigure 1 should be better annotated to allow the reader to more easily connect parts of the described pipeline with the figure. Additionally, all parts of the pipeline should be depicted, including the input images and the 3DMM model. These changes would allow for better and easier understanding of the proposed approach. The plots and text in Figure 4 are also too difficult to read, due to their small scale. Their readability should be improved by making the figures larger or at least increasing the font size. \\n\\n3.\\tThe experiments section could be improved by introducing the metrics in a more detailed manner. For example, peak signal-to-noise ratio (PSNR) is reported in Table 1 but is never mentioned in the paper. It would also be highly beneficial to utilize more metrics to showcase the suitability of the DiffDeID approach. For example, Face Image Quality Assessment (FIQA) measures could reveal valuable insight into the quality of de-identified images [1]. Furthermore, genuine and imposter distributions along with corresponding measures (e.g. Equal Error Rate, Fisher\\u2019s Discriminant Ratio) might also provide additional information regarding the separability of real and de-identified identities. \\n\\n4.\\tThe paper would also benefit from additional ablation studies in the supplementary material for different aspects of the proposed pipeline. For example, the masking of the mouth could be qualitatively evaluated. Similarly, the influence of different 3DMM coefficients not related to identity could also be evaluated, e.g. showcasing the images generated without and with control over the different coefficients.\\n\\n5.\\tWhen evaluating video face de-identification, it would be beneficial to showcase qualitative samples of state-of-the-art approaches in Figure 5. 
alongside samples generated by DiffDeID.\", \"questions\": \"The main points that should be addressed are listed in the weaknesses. Here are a few questions that might spark some discussions:\\n\\n1.\\tIn the paper you focus on preserving various aspects of faces (e.g. pose), while changing the identity of subjects. Have you perhaps explored how your approach influences other soft biometric features, for example the color of skin? Would it be beneficial to provide such control in the future? Can you provide any comments?\\n\\n2.\\tWhen discussing video de-identification, the paper does not mention the time requirements of the method. Can DiffDeID perhaps be run in real-time, e.g. at a lower level of de-identification? How about other existing approaches? Can you perhaps elaborate more on this topic or even mention this aspect in the paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a multi-conditional Diffusion-based method for face de-identification. The method employs a Latent Diffusion-based ID Sampler to generate authentic identity embeddings that are obfuscated from the original identity. 
Also, a 3D prior is used to improve the robustness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"A Latent Diffusion-based ID Sampler based on DDIM is proposed, which can sample multidimensional identity embeddings with diversity and realism\", \"DiffDeID pipeline based on a multi-conditional diffusion model for face de-identification is proposed, which can preserve the pose and expression information of the source face image while replacing the identity information\"], \"weaknesses\": [\"For quantitative measures, there is no analysis of the face recognition performance in Table 1, which I think is the most crucial experiment to demonstrate the face de-identification.\", \"The visualization is not satisfactory. In Fig. 3, even for level 1, all the faces look different from the original images. Subjective user studies are suggested to consolidate this part.\", \"No ablation studies are provided to evaluate each proposed component. In particular, I am curious about the difference between with and without 3DMM.\"], \"questions\": \"See the Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors propose a diffusion-based de-identification method. Specifically, they aim to replace the identity information in the source images with a synthetic face while preserving original facial attributes such as hair and movement, thereby protecting the privacy of the individuals in the source images. To achieve this, they introduce a multi-condition diffusion model that can sample a person\\u2019s identity. Additionally, to facilitate identity swapping, they employ an identity mask to obscure the original facial features.\\n\\nOverall, the de-identification method proposed in this paper has merits, but several concerns remain. 
Please refer to the weaknesses.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"A diffusion-based de-identification method, which leverages multidimensional identity embeddings, contributes to enhancing user privacy protection.\", \"Multiple experiments demonstrate the effectiveness of the method.\"], \"weaknesses\": [\"The distinction between de-identification and face swapping: The method proposed in this paper essentially appears to be a form of face swapping, i.e., the well-known deepfake. The key difference is that in deepfake technology, the target face is a real individual, while in this work, the target face is derived from a 3DMM. However, this distinction does not create a clear boundary between the two approaches. The reviewer suggests that the authors should compare their method with state-of-the-art face swapping algorithms.\", \"Although the experiments demonstrate the high quality of the synthesized faces, there is still a lack of quantitative experiments showing the effectiveness of this method in de-identification. The authors propose a de-identification ID metric, but this metric is not consistently highlighted across all experiments.\", \"Another concern relates to the practical application. 
In what scenarios would users need to generate a video with a completely different identity, especially one that is neither animated nor intentionally altered for comedic or entertainment purposes?\", \"Regarding ethical concerns: Since the proposed method is somewhat related to deepfake technology and involves higher-precision face swapping, it may pose potential risks for harmful applications.\"], \"questions\": \"Please see weaknesses\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": [\"Regarding ethical concerns: Since the proposed method is somewhat related to deepfake technology and involves higher-precision face swapping, it may pose potential risks for harmful applications.\"], \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents a de-identification method called DiffDeID which aims to generate high-fidelity de-identified faces with a balance between anonymity and utility. Diverse identities are generated through sampling in the latent space of a diffusion model.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Authors have explored the possibility of performing de-identification tasks with diffusion models.\", \"weaknesses\": \"-Writing\\nThe motivation of the paper is not good enough. Why we need \\n-Experimental results\\n1. Current state-of-the-art diffusion models can generate images with a resolution of 512x512. Authors only show results on images of 256px.\\n2. The function of several components, which authors claimed to be useful, are not evaluated. There is no ablation study section.\\n-Comparison with existing baselines\\n3. The provided method only compares with Deepprivacy, CIAGAN and Repaint, the newest of which was published in 2020. Several important methods are missing, e.g. FiT [1], RiDDLE [2]. \\n4. Even in the methods mentioned in the paper, the qualitative results are not shown. 
Authors should put faces generated from different methods in a single image so reviewers can make a direct judgement.\\n[1] Gu et al. Password-Conditioned Face Identity Transformer\\n[2] Li et al. RiDDLE: Reversible and Diversified De-identification with Latent Encryptor\\n-Novelty\\nExcept for the latent diffusion models incorporated in the framework, little novelty is found in the paper. Current SOTA de-id methods can achieve anonymization, which the presented method cannot do. \\n-Figure\\nI suggest the authors rewrite the paper and redraw some important figures. Many figures in the paper are either too small for me to read the text (e.g. Fig. 4) or so large that they occupy too much space (e.g. Fig. 1).\", \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a face de-identification method that utilizes a multi-condition diffusion model. The proposed method employs the Latent Diffusion-based ID sampler to generate diverse and realistic identity embeddings, which are then used as anonymized identity embeddings for subsequent replacement. Furthermore, a multi-condition diffusion model is employed for facial images to ensure the preservation of image utility.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The utilization of a Latent Diffusion-based ID Sampler enables the generation of diverse and realistic identity embeddings, serving as anonymized identity embeddings for subsequent substitution.\\n2. A Multi-Condition Diffusion Model is utilized on facial images to guarantee the maintenance of image utility.\", \"weaknesses\": \"1. The proposed method neglects the importance of identity recovery. 
Face de-identification methods should have the ability to recover the original faces when security conditions are satisfied. Otherwise, the distinction between the face de-identification method and face swapping algorithms would be minimal, as face swapping algorithms involve replacing identities while preserving consistency in expressions and poses.\\n2. The conducted experiments were not sufficiently comprehensive, and there was a lack of ablation experiments to clarify the specific effects of the proposed method.\", \"questions\": \"1. The proposed method neglects the importance of identity recovery. Face de-identification methods should have the ability to recover the original faces when security conditions are satisfied. Otherwise, the distinction between the face de-identification method and face swapping algorithms would be minimal, as face swapping algorithms involve replacing identities while preserving consistency in expressions and poses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
Bz6eAiOjrI
Orator: LLM-Guided Multi-Shot Speech Video Generation
[ "Jiaben Chen", "Yang Fu", "Ailing Zeng", "Zixin Wang", "Siyuan Cen", "Xueyang Yu", "Julian Tanke", "Yihang Chen", "Koichi Saito", "Yuki Mitsufuji", "Chuang Gan" ]
In this work, we propose a novel system for automatically generating multi-shot speech videos with natural camera transitions, using input text lines and reference images from various camera angles. Existing human video generation datasets and methods are largely centered on faces or half-body single-shot videos, thus lack the capacity to produce multi-shot full-body dynamic movements from different camera angles. Recognizing the lack of suitable datasets, we first introduce TalkCuts, a large-scale dataset containing over 500 hours of human speech videos with diverse camera shots, rich 3D SMPL-X motion annotations, and camera trajectories, covering a wide range of identities. Based on this dataset, we further propose an LLM-guided multi-modal generation framework, named Orator, where the LLM serves as a multi-role director, generating detailed instructions for camera transitions, speaker gestures, and vocal delivery. This enables the system to generate coherent long-form videos through a multi-modal video generation module. Extensive experiments show that our framework successfully generates coherent and engaging multi-shot speech videos. Both the dataset and the model will be made publicly available.
[ "Speech video generation", "Multimodal video generation", "Human video dataset", "LLM-directed human video synthesis" ]
Reject
https://openreview.net/pdf?id=Bz6eAiOjrI
https://openreview.net/forum?id=Bz6eAiOjrI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "s3YDr1kEEV", "nu33pAkqO4", "n6iDIBDDWn", "jdG6n4d1Ez", "TJ2jMqMxc2", "FdTWciTacM", "92WqW7iPz3" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "meta_review", "decision", "official_review" ], "note_created": [ 1730730247409, 1730290482769, 1730561000702, 1730550554159, 1734591217288, 1737523406158, 1730257444433 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission608/Reviewer_uS7p" ], [ "ICLR.cc/2025/Conference/Submission608/Reviewer_A2Pw" ], [ "ICLR.cc/2025/Conference/Submission608/Reviewer_b2gi" ], [ "ICLR.cc/2025/Conference/Submission608/Reviewer_8WEn" ], [ "ICLR.cc/2025/Conference/Submission608/Area_Chair_Xreg" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission608/Reviewer_J3eb" ] ], "structured_content_str": [ "{\"summary\": \"This work is technically novel and interesting. The proposed Orator system adopts the LLM guided multimodal generation framework, which can automatically coordinate the camera transitions, speaker gestures and voice outputs to generate coherent and attractive multicamera speech videos. At the same time, they also created a new large-scale dataset, TalkCuts, which contains hundreds of hours of richly annotated multicamera speech videos, which is useful for research in related fields. However, the writing and experimental parts of the paper still need some improvement.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"A new task is proposed: speech video generation with dynamic camera switching. The first large-scale dataset dedicated to this task, TalkCuts, was created, and a novel multimodal generation framework, Orator, was proposed, in which DirectorLLM acts as a multi-role director to guide the process. 
These are all highly original contributions.\", \"weaknesses\": \"Generally speaking, the paper is understandable, but some details are not clear enough, for example, the mechanism of how DirectorLLM directs the work of each module could be explained in more detail. The experimental part also lacks more quantitative and qualitative results to support the claimed advantages.\", \"questions\": \"The overall methodology is reasonable, but there is still a lack of justification in some details, such as the lack of more experimental results to quantitatively assess the metrics of authenticity, coherence, and diversity of the generated videos.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Orator, a system for generating multi-shot speech videos with natural camera transitions. This paper introduces a large-scale dataset featuring over 500 hours of speech videos with multi-angle shots, 3D motion annotations, and camera trajectories.\\nOn top of that, Orator leverages a large language model (LLM) as a \\u201cdirector\\u201d to generate detailed instructions for camera transitions, gestures, and vocal delivery, which guides a multi-modal video generation module to produce coherent, long-form speech videos.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper explores the problem of how to generate multi-camera speech videos with dynamic lens transitions, which extends a previous method for generating single-camera half-body videos.\\n2. The method is well supported by the introduction of TalkCuts. This dataset combined with a well-designed multimodal framework can generate multicamera speech videos with dynamic view transitions.\", \"weaknesses\": \"1. The data collection and annotation process is fully automated, with no manual verification involved. 
This raises concerns about the dataset's quality.\\n\\n2. I don\\u2019t see clear evidence of text similarity between the input speech scripts and the corpus that could be used for transition planning.\\n\\n3. In Lines 192-193, you mentioned that \\\"each identity is recorded with multiple diverse camera shots.\\\" Could you provide statistics on the number and type of shots for each identity, particularly for \\u201cvlog\\u201d?\\n\\n4. Please share the access link to your dataset. The model appears to have numerous modules, and without access to the code, it will be challenging to reproduce your results.\\n\\n5. L364-365: typo.\", \"questions\": \"What is the practical value and application of multi-shot speech video generation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The innovations of this paper include the Orator system and the TalkCuts dataset: the multimodal video generation module in the Orator system integrates the collaboration of multiple sub-modules, each with its own unique strengths, and the DirectorLLM acts as a multi-role director to guide the video generation; the TalkCuts dataset is characterized by its large scale, diversity, and rich annotation information, which provides powerful support for multi-camera speech video generation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. By integrating the multimodal video generation module and DirectorLLM, the Orator system achieves effective coordination and fine control of the video generation process, providing a new solution for multi-camera speech video generation.\\n2. 
The TalkCuts dataset is large in size, rich in diversity and comprehensively labeled, which fills the gap of existing datasets in multi-camera speech video generation and provides important data support for the research.\\n3. Comprehensive experimental evaluations are conducted on several key tasks, with improvements over the baseline model, demonstrating the effectiveness and sophistication of the method.\", \"weaknesses\": \"1. The tables do not highlight the best results in bold; please pay attention to such details.\\n2. Although the experimental results show some advantages over the baseline model in some metrics, the demo shows noticeable abruptness when switching between shots, and the human body lacks good temporal consistency.\\n3. The dataset is primarily focused on the speech domain, which may limit the generalizability of the model, and may have more application scenarios in other domains such as movies.\", \"questions\": \"1. I checked your online demo and noticed that your method has a noticeable effect on the background around the body, similar to a change in light and shadow. What is the reason for this?\\n2. For the model combining Stable Video Diffusion and ControlNeXt in the VideoGen module, how do you balance the contribution of the two models during training to achieve the best video generation results?\\n3. When there are ambiguities or conflicts in the instructions given by DirectorLLM, how does the system handle them to ensure coherent and reasonable video generation?\\n4. For the large number of videos collected, what are the specific screening criteria and process in the manual screening process? 
How do you ensure the consistency and accuracy of the screening?\\n\\n\\nGenerated videos were not seamlessly integrated in terms of interaction with props and environments, such as the lack of speaker interaction with the microphone or walking on stage, limiting the naturalness of the videos.\\nIt is noted that audience engagement elements such as eye contact, gaze shifts, and facial expressions were difficult to capture and simulate because of the lack of audience cues. Although the system can handle multi-camera transitions, it has not yet incorporated moving camera dynamics, which affects the realism of the video.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}
Additionally, there are issues with maintaining appearance consistency, such as blurry hair.\", \"The novelty is limited. The video generation part is implemented by combining SVD and ControlNet. It is within expectations that better results can be achieved by training on speech video data. Perhaps a more end-to-end model design would be more innovative. For example, removing SMPL as an intermediate representation and eliminating explicit shot state representations could reduce information loss during intermediate transmission and produce more natural results.\", \"There is a lack of any overall effect evaluation. The author only made performance comparisons of the results for each sub-module. However, the comparisons of video generation and speech to gesture generation are both cases where the fine-tuned model has performance advantages over the baseline. This is very normal and not a difficult thing.\"], \"questions\": \"(1) In the two complete generated results displayed on the homepage, are all the IDs in the source images untrained? That is, is it a completely one-shot setting? I haven't found an explanation for this. If it is omitted, please let me know. Thanks.\\n\\n(2) I haven't found an evaluation of the final generated video, which is the purpose of this paper. There are many evaluations of specific sub-modules, but there is not much novelty contribution in terms of method design. The evaluation method of the final generated video may also be an important content of this paper and should be discussed in detail and be self-contained.\\n\\n(3) For the evaluation of the final generated video, although there is a lack of directly comparable methods, I believe there are two key aspects that need to be assessed. These aspects determine whether the proposed method in this paper is (1) a stitched-together approach that combines existing methods, or (2) a functional framework prototype. 
Firstly, regarding the quality of the synthesis, it can be evaluated using objective metrics such as PSNR and pretrained video quality assessment models compared with real speech videos, as well as subjective scoring to compare the current performance gap. Secondly, concerning the usability rate of the synthesis results, we should assess how many of the randomly generated results do not exhibit severe artifacts (which can be considered completely non-functional), such as body deformities, meaningless shot cuts, and extremely low lip-sync accuracy. For the proposed speech video generation task, currently having only two video results is insufficient.\\n\\nOverall, this paper addresses a very interesting and highly challenging task. However, the experimental section is not sufficiently comprehensive. It lacks extensive experimental results to validate the effectiveness of the approach in generating engaging and realistic multi-shot speech videos. Instead, it primarily demonstrates improvements of individual components over baselines in their respective tasks. There is no detailed experimentation on whether the approach can effectively and holistically generate engaging and realistic multi-shot speech videos. This additional validation will be very important.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents the Orator system, which adopts an LLM-guided multimodal generation framework that can automatically coordinate the camera transitions, speaker gestures, and voice outputs to generate multi-camera speech videos. A new large-scale dataset, TalkCuts, is provided, which contains annotated multi-camera speech videos.\\n\\nHowever, reviewers expressed concerns about overstatement in the paper and about the visual results. 
\\nSome obvious limitations have been pointed out by the reviewers, such as the requirement to provide a reference image for each camera shot. Furthermore, reviewers b2gi, 8WEn, and J3eb all spotted the demo issues. Due to these issues, the rebuttal did not change the reviewers' minds, though the writing improved during revision. \\n\\nAC made the decision due to the obvious limitations of this work and noticeable artifacts in the results.\", \"additional_comments_on_reviewer_discussion\": \"This paper received mixed reviews. After the rebuttal, reviewers did not change their minds due to the weaknesses pointed out in the reviews. In particular, reviewers cross-checked each other's comments.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"In this method, the user needs to input a reference set containing the desired shots and text lines. A DirectorLLM then serves as a multi-role director, controlling the motion, camera transitions, and audio of the generated video. The authors also introduce a new dataset, TalkCuts, to support community research. The experimental results show promising outcomes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. They utilize a DirectorLLM to control the camera transitions, motion, and audio. In this way, the story of the generated video should be more realistic compared with manual control.\\n2. They contribute a more powerful dataset, TalkCuts.\\n3. They show promising results in the experiment part.\", \"weaknesses\": \"1. The temporal consistency of results on their project page needs improvement.\\n2. It seems that SMPL-X doesn\\u2019t model the mouth region, which makes that part look unnatural.\\n3. The Retrieval-based Augmentation Generation in Figure 3 is unclear.\", \"questions\": \"1. In Figure 1, do you need to provide a reference image for each shot? 
For example, is it because the reference image you input shows a close-up shot, and DirectorLLM then guides the generation of a close-up clip based on that reference? Is it possible for Orator to produce novel-view clips? For example, given several full-body reference images, could Orator generate a close-up shot clip?\\n2. If possible, the text lines can also be removed: just give a story brief and ask the DirectorLLM to generate the text lines according to the given story. More than that, DirectorLLM could also generate the description of the reference image from the story and use SD to generate reference images. \\n3. This system looks interesting. However, it gives me a feeling that this paper collects existing SpeechGen, MotionGen, and VideoGen models and utilizes DirectorLLM to control them. \\n4. I still love this paper; I would like to upgrade my rating if the authors can give me more insight during the rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NO\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BydkbNH0gj
LESS IS MORE: HIGH-VALUE DATA SELECTION FOR VISUAL INSTRUCTION TUNING
[ "Zikang Liu", "Kun Zhou", "Xin Zhao", "Dawei Gao", "Yaliang Li", "Ji-Rong Wen" ]
Visual instruction tuning is the key to building large vision language models (LVLMs), which can greatly improve the task generalization and solving capabilities by learning a mixture of instruction data from diverse visual tasks. Previous work mostly collects multiple existing visual instruction datasets via heuristic ways for training (even more than a million instructions), which may introduce data redundancy and enlarge the training cost. To investigate this issue, we conduct a series of empirical studies, which reveal a significant redundancy within the visual instruction datasets, and show that greatly reducing the amount of instructions from several tasks even does not affect the performance. Based on the findings, we propose a high-value data selection approach $\textbf{TIVE}$, to eliminate redundancy within the visual instruction data and reduce the training cost. In TIVE, we first estimate the instance influence score on its corresponding task, and the task difficulty score, based on the gradient-based influence functions. Then, we leverage the two kinds of scores to determine the task proportion within the selected visual instruction subset, and select high-value instances for each task, respectively. Experiments on various LVLMs show that our approach using only about 15% data can achieve comparable average performance to the full-data fine-tuned model across eight benchmarks, even surpassing it on four of the benchmarks. Our code and data will be publicly released.
[ "Visual Instruction Tuning", "Data Selection" ]
Reject
https://openreview.net/pdf?id=BydkbNH0gj
https://openreview.net/forum?id=BydkbNH0gj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zW0dBWoVdi", "tukjPsJXOO", "tYC5y1Fpql", "tIB6jGq8dR", "sBBwfXFPm1", "rzldKnIgcX", "qzqzhuBORb", "qkd8TUWmNV", "qMpBl0YzqU", "pf52oZMPzb", "pPm3CzumKV", "mzPYz0KHrM", "mTwd630KEL", "m54nc1pB6J", "l5D2lxFfNy", "hU9wICBiQq", "fsp95kVaFs", "fB1HUNPfQS", "eHzSG7xC1C", "bf6pd7YJx0", "bOO6M81TG5", "aexy8DjGw7", "VF4lVLYz3C", "SwrK6NYRMb", "RpIrMlVTK2", "R8SrZFPlA0", "Pe4Oevh5TD", "N5En1ESVkc", "IrZdI3Hseg", "IKH6GVNJGR", "FudNT6ddlA", "95pK7MdMTe", "8PNv9J7EKj", "7m6mAmDV4q", "5ETcrgGdJZ", "44Hn4v4ese", "3MagLEdQJ0", "29wPfNAk3O", "1fJCCDNPgl", "074JXKKUh9" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1730694674718, 1732093231402, 1732760060067, 1732849215613, 1733027349094, 1732209550116, 1732612024682, 1732092920408, 1732092361416, 1732758870556, 1732094902416, 1732280331629, 1737523487772, 1732791158401, 1730104610544, 1732094291956, 1730417799098, 1732095037012, 1733196262914, 1732552308277, 1732502229293, 1733218225808, 1732258917664, 1732733319213, 1732094750939, 1733196279549, 1733027331106, 1732258253664, 1732094030620, 1732502313736, 1732094130046, 1732094442116, 1732502260693, 1732284900393, 1732502346491, 1732093300102, 1730710212331, 1732761310253, 1732759533148, 1733377903157 ], 
"note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_P1cf" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_P1cf" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_i5DE" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_P1cf" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_P1cf" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_g2yS" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_i5DE" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_P1cf" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_P1cf" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_WdBE" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Reviewer_WdBE" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Authors" ], [ "ICLR.cc/2025/Conference/Submission2127/Area_Chair_WCGw" ] ], "structured_content_str": [ "{\"summary\": \"This work studied the redundancy problem in the visual instruction tuning dataset for LVLMs. It proposed a high-value data selection approach TIVE, to eliminate redundancy within the visual instruction data and reduce the training cost. TIVE can effectively reduce the training data size of different VLM instruction tuning datasets across different models without compromising the overall performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The observation of the dataset redundancy problem aligns with the community's observations.\\nThe proposed TIVE method sounds reasonable.\\nThe authors conduct extensive experiments with detailed analysis to demonstrate the effectiveness of the method and its components.\", \"weaknesses\": \"1. Though the experiments are comprehensive, there are several points to further discuss or clarify in the method. See questions.\\n2. The authors need to provide a further discussion on the overall cost of the method: as TIVE needs the reference model trained with warmup data, the selection of TIVE is generally model-specific. TIVE needs to compute the LoRA gradient over all samples in the pool, then this cost is close to training on all of the data with LoRA. Tuning the hyper-parameters of TIVE would give another dimension of complexity if there are no default hyper-parameters. From this perspective, this method may fail to reduce the overall training costs. If so, it needs to target improving the final performance (without insisting on 15% of data) and discuss more about how to achieve this (what proportion of data is the best?). 
If not, the corresponding additional cost should be discussed.\", \"questions\": \"1. In algorithm 1, the task influence is calculated in a nested for loop, with an overall complexity $O(|D_i|^2)$ for each task. A question is, could the authors first use one pass to aggregate the average of the normalized gradients and then use another pass to calculate the score? This would reduce the complexity to linear. Will this cause numerical instability, or not? Originally, were the gradients stored or re-computed?\\n2. Is the gradient of a sample used for the influence score computed over all tokens and then averaged, or only over the output tokens? \\n3. In line 301, $\\\\lambda$ is introduced as \\\"We use a hyperparameter $\\\\lambda$ to control the temperature of the weight distribution\\\". However, how it is actually used is presented in Line 871 in the appendix. The ablation of $\\\\lambda$ appears before readers know how it is actually used. The ordering of this part needs further consideration.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response to Reviewer WdBE (Part 3/4)\", \"comment\": \"> [W3 & Q2] The overall performance in Table 1 against backbone models is weak, only showing significant improvement on the SciQA benchmark, with accuracy drops or merely comparable results on other benchmarks (which may be due to experimental uncertainty). This may mean the selection approach is sub-optimal.\\n\\n* Performance of TIVE and TIVE-efficient compared to other baselines at 50% sample rate\\n| Method | MME-P | MMBench | SEED-I | SQA-I | Avg. 
|\\n| ------------------ | ---------- | -------- | -------- | -------- | -------- |\\n| LLaVA-1.5 | 1510.7 | 64.3 | 66.1 | 66.8 | 68.2 |\\n| Random | 1458.2 | 63.2 | 63.8 | 68.2 | 67.0 |\\n| GraNd | 1462.6 | 63.8 | 63.2 | 67.6 | 66.9 |\\n| LESS | 1488.4 | 64.6 | 64.0 | 69.2 | 68.0 |\\n| **TIVE** | **1506.1** | **66.7** | **66.2** | **69.6** | **69.4** |\\n| **TIVE-efficient** | **1500.8** | **66.3** | **66.1** | **69.8** | **69.3** |\\n\\n* Total time cost for TIVE and TIVE-efficient at 50% sample rate\\n\\n| | Total Time Cost |\\n| ------------------- | ----------------------------------------- |\\n| TIVE | ~ 0.9h + 9.6h + 0.2h + 5.7h = 16.4h |\\n| TIVE-efficient | ~ 0.9h + 1.0h + 0.6h + 0.2h + 5.7h = 8.4h |\\n| Full-data fine-tune | ~ 11.5h |\\n\\nTo highlight the data redundancy issue in the dataset, we set a rather aggressive data pruning rate (85%) in our paper. In traditional machine learning settings, pruning about 50% of the original dataset would almost certainly result in a significant decrease in model performance even on simple tasks such as image classification [1, 2, 3, 4, 5]. In contrast, we can also use a lower rate to guarantee performance. To validate it and address the reviewer's concern, we conduct experiments at a lower pruning rate (50%). We present the results and the overall cost of TIVE in this setting in the tables above. Under this setting, both TIVE and TIVE-efficient achieve similar or better results on almost all benchmarks compared to full-data fine-tune, and outperform other data selection baselines by a large margin. In addition to this, at the pruning rate of 50%, TIVE-efficient still has a lower overall time cost compared to full-data fine-tune, but achieves better results. Interestingly, as the pruning rate decreases (from 85% to 50%), the performance gap between TIVE-efficient and TIVE becomes smaller. 
This might indicate that TIVE-efficient can be a better alternative to TIVE when we are allowed to retain a relatively large amount of samples.\\n\\n**Reference:**\\n\\n[1] Toneva, Mariya, et al. \\\"An empirical study of example forgetting during deep neural network learning.\\\" *arXiv preprint arXiv:1812.05159* (2018).\\n\\n[2] Coleman, Cody, et al. \\\"Selection via Proxy: Efficient Data Selection for Deep Learning.\\\" *International Conference on Learning Representations* (2020).\\n\\n[3] Paul, Mansheej, Surya Ganguli, and Gintare Karolina Dziugaite. \\\"Deep learning on a data diet: Finding important examples early in training.\\\" *Advances in neural information processing systems* 34 (2021): 20596-20607.\\n\\n[4] Killamsetty, Krishnateja, et al. \\\"Glister: Generalization based data subset selection for efficient and robust learning.\\\" *Proceedings of the AAAI Conference on Artificial Intelligence* 35.9 (2021): 8110-8118.\\n\\n[5] Baldock, Robert, Hartmut Maennel, and Behnam Neyshabur. \\\"Deep learning through the lens of example difficulty.\\\" *Advances in Neural Information Processing Systems* 34 (2021): 10876-10889.\"}", "{\"comment\": \"Thank you immensely for your active participation and valuable insights during the discussion phase. Your feedback is crucial and has greatly aided in improving the quality of our paper. We will continue to refine our research based on your suggestions, even if we cannot further revise the paper. However, we have a point of confusion. As the reviewer points out, finding the optimal amount of data for visual instruction tuning is very important. Yet, this heavily depends on the actual computation budget and the specific target task, making it difficult to identify an optimal sampling rate to achieve the desired effect. Can we interpret this as follows: for a specific target task, is it possible to find an optimal sampling rate that saturates the model's performance?\"}", "{\"comment\": \"Thank you for your suggestions. 
We are delighted to share some of our insights:\\n\\n1. For the unselected samples, are these samples non-beneficial at the beginning? \\n\\nOur understanding is that the majority of the unselected data is also effective. We define effectiveness as whether a sample contributes positively to the overall task learning, which is measured by the instance-task influence. In our experiments, the vast majority of samples indeed have a positive influence, indicating that they are effective. However, due to sampling rate limitations, we can only select the most effective portion of the samples.\\n\\n2. Otherwise, why, given some other samples presented, are these samples no longer beneficial? \\n\\nThis is an interesting question. In fact, given the other samples presented (as our selected samples) and trained, these unselected samples indeed become less beneficial as training progresses. Our main finding is that most samples have a very similar effect (measured via instance influence) on model training, even if they look like completely different samples. This results in a situation where, if sample A and sample B initially contribute equally to task learning, the contribution of sample B to task learning will decrease after training on sample A, as the model has already learned the knowledge contained in this sample. This leads to the phenomenon that if we further train on B, the model's learning efficacy on this task would be minimal, and could even result in negative effects of overfitting on such samples. That's why we need to select an effective portion of samples to reduce redundancy.\\n\\n3. Are there methods to measure this influence and mitigate this problem, so that we can always construct a better dataset? \\n\\nIn our opinion, redundancy issues exist in all datasets, with similar data points present in almost every dataset. These data points can always be learned from a small subset of samples, and training on all data would result in redundancy. 
In the era of large models, this problem becomes increasingly apparent. Given that large models inherently encompass extensive world knowledge and generalization capabilities, many data points may not provide any gain to the model and could be considered redundant. However, for more challenging tasks, the issue of redundancy tends to be alleviated. Our proposed TIVE strives to address the redundancy issue from this perspective. Our response to question 2 actually explains why we need to select the most effective samples, and why training on this subset might yield better performance on the entire task dataset. However, as we notice, if a task is really hard, then the unselected samples might still be beneficial even after training on the selected subset. Consider again sample A and sample B, which contribute similarly to task learning, and suppose the overall task is very challenging: even after training on sample A, the model still struggles to fully learn from such samples. At this point, sample B's contribution to task learning indeed decreases, but not to near zero (compared to less challenging tasks), indicating that it is still an effective sample. Therefore, we need to consider increasing the proportion of samples for this task, sampling more of these effective samples to learn such difficult tasks. This is also the reason we introduce the task-difficulty-based value in our study.\\n\\n4. Are there some theoretical, or even heuristic, methods that can help us predict the performance saturation?\\n\\nThis is a really hard problem. From the perspective of training dynamics, indeed, we can predict whether the model's training has saturated, i.e., whether the loss has ceased to decrease. This can be estimated through influence estimation. If the influence of sample A on sample B is zero or negative, then training on sample A will not effectively reduce the model's loss on sample B. 
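As a concrete illustration of the influence arithmetic discussed here — a minimal NumPy sketch of our own, not the paper's actual implementation (the gradient normalization and the name `instance_task_influence` are our assumptions) — the influence of one sample on another is the dot product of their gradients, and the instance-task influence of Formula 2 can be computed in linear rather than quadratic time by first aggregating the average of the normalized gradients in one pass, as Reviewer P1cf's Q1 suggests:

```python
import numpy as np

def instance_task_influence(grads):
    """Per-sample influence on the whole task (in the spirit of Formula 2).

    grads: (n, d) array holding one flattened per-sample gradient (e.g. from
    LoRA parameters) for the n samples of a single task.
    The nested-loop version averages dot(g_i, g_j) over all j, costing
    O(n^2 * d); because the dot product is linear in its second argument,
    dot(g_i, mean_j g_j) yields the same scores in O(n * d).
    """
    normed = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    mean_grad = normed.mean(axis=0)  # pass 1: aggregate normalized gradients
    return normed @ mean_grad        # pass 2: score every sample

# sanity check: the linear form matches the O(n^2) nested loop
rng = np.random.default_rng(0)
g = rng.normal(size=(8, 16))
fast = instance_task_influence(g)
normed = g / np.linalg.norm(g, axis=1, keepdims=True)
slow = np.array([np.mean([normed[i] @ normed[j] for j in range(8)])
                 for i in range(8)])
assert np.allclose(fast, slow)
```

Because of this linearity, the one-pass form equals the nested loop up to ordinary floating-point summation error, so no extra numerical instability is introduced.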
As our paper primarily focuses on improving the model's generalization, we did not use data from downstream tasks. However, if we aim to precisely estimate the model's performance on a specific downstream task, we would need to calculate the influence of the remaining samples on the downstream task data at every stage of the model's training. If no sample point effectively reduces the loss on the downstream task, then the model's training has already saturated. Although this approach is theoretically feasible, in practice, it requires repeated gradient calculations, leading to significantly higher computational costs. Moreover, a reduction in loss does not necessarily result in performance gains for the actual task and may even introduce potential overfitting.\"}", "{\"comment\": \"Dear Reviewer i5DE,\\n\\nThank you for your time and effort in reviewing our paper. We have addressed your comments and want to kindly remind you that the rebuttal period is ending soon. We look forward to discussing our responses and any other aspects of our research with you. Please let us know if you have any concerns.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thanks to the authors for the detailed response, which clarifies my points of concern. I would like to encourage the authors to include these discussions and results in the paper to make this work more solid and complete (especially how TIVE-efficient can save cost and how the original TIVE can be used to improve performance; the linear implementation to avoid unnecessary complexity; also other points to clarify).\\n\\nAs there are still several days, I would encourage the authors to include the mentioned revisions (and mark the changes with color). I may raise my score according to the adjusted content. 
Overall, I hold a positive view of this paper.\"}", "{\"title\": \"Official Response\", \"comment\": \"After reviewing the response and considering the comments from all other reviewers, I have decided to maintain my original negative score.\"}", "{\"title\": \"Official Response to Reviewer WdBE (Part 2/4)\", \"comment\": \"We provide a continuation of our previous response to [W1 & Q1] here. We present the results of TIVE and TIVE-efficient below.\\n| Method | MME-P | MMBench | SEED-I | SQA-I | Avg. |\\n| ------------------ | ---------- | -------- | -------- | -------- | -------- |\\n| Random | 1386.5 | 61.8 | 61.9 | 68.4 | 65.4 |\\n| Length | 1413.0 | 59.3 | 61.2 | 69.2 | 65.1 |\\n| Perplexity | 1393.3 | 62.3 | 61.3 | 67.9 | 65.3 |\\n| GraNd | 1400.5 | 62.9 | 62.3 | 68.4 | 65.9 |\\n| EL2N | 1356.5 | 61.6 | 61.9 | 66.2 | 64.5 |\\n| **TIVE** | **1433.0** | **65.0** | **63.2** | **70.6** | **67.6** |\\n| **TIVE-efficient** | **1424.9** | **64.3** | **62.5** | **70.8** | **67.2** |\\n\\nAs we can observe, the performance of TIVE-efficient is similar to that of the original TIVE, and it consistently outperforms the other methods. These experiments confirm that TIVE can be implemented in a more efficient way without a significant compromise in performance. We will supplement the design details and experimental results of TIVE-efficient in the revised version of our paper. We hope that this efficient implementation can address your concern about the total cost of TIVE.\\n\\n---\\n\\n> [W2] The selection based on gradients is a posterior probability, which means choosing the hard samples as prior knowledge. This may be unfair for the comparisons against baselines.\\n\\n| Method | MME-P | MMBench | SEED-I | SQA-I | Avg. 
|\\n| -------------- | ---------- | -------- | -------- | -------- | -------- |\\n| Random | 1386.5 | 61.8 | 61.9 | 68.4 | 65.4 |\\n| Length | 1413.0 | 59.3 | 61.2 | 69.2 | 65.1 |\\n| Perplexity | 1393.3 | 62.3 | 61.3 | 67.9 | 65.3 |\\n| GraNd | 1400.5 | 62.9 | 62.3 | 68.4 | 65.9 |\\n| EL2N | 1356.5 | 61.6 | 61.9 | 66.2 | 64.5 |\\n| **MoSo** | **1410.2** | **62.6** | **62.4** | **68.1** | **65.9** |\\n| **LESS** | **1415.1** | **63.0** | **62.2** | **68.8** | **66.1** |\\n| TIVE | 1433.0 | 65.0 | 63.2 | 70.6 | 67.6 |\\n| TIVE-efficient | 1424.9 | 64.3 | 62.5 | 70.8 | 67.2 |\\n\\n\\nWe appreciate the reviewer's constructive suggestions regarding our experimental results. In fact, among the baselines we adopt, EL2N, GraNd, and Perplexity also utilize prior knowledge, and TIVE still achieves better performance compared to them. \\nTo make our results more convincing, we include two additional gradient-based data selection methods, MoSo[6] and LESS[7], as baselines. MoSo measures how empirical risk changes when a specific sample is removed from the original training dataset based on computed gradients, and LESS matches the gradients of training samples with those of validation samples to find the most effective data points. The supplementary experimental results are displayed in the table above. The results indicate that even compared to methods that also employ gradient features, TIVE and TIVE-efficient consistently achieve superior performance across almost all benchmarks. Actually, our advantage mainly lies in our consideration of data redundancy at both the task level and the instance level, while previous studies typically focus solely on the instance level, overlooking the variations in task difficulty.\\n\\n**Reference:**\\n\\n[1] Kobayashi, Sosuke, et al. \\\"Efficient estimation of influence of a training instance.\\\" *arXiv preprint arXiv:2012.04207* (2020).\\n\\n[2] Kwon, Yongchan, et al. 
\\\"DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models.\\\" *The Twelfth International Conference on Learning Representations*.\\n\\n[3] Guu, Kelvin, et al. \\\"Simfluence: Modeling the influence of individual training examples by simulating training runs.\\\" *arXiv preprint arXiv:2303.08114* (2023).\\n\\n[4] Influence tuning: Demoting spurious correlations via instance attribution and instance-driven updates\\n\\n[5] Zhou, Kun, et al. \\\"JiuZhang3. 0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models.\\\" *arXiv preprint arXiv:2405.14365* (2024).\\n\\n[6] Tan, Haoru, et al. \\\"Data pruning via moving-one-sample-out.\\\" *Advances in Neural Information Processing Systems* 36 (2024).\\n\\n[7] Xia, Mengzhou, et al. \\\"Less: Selecting influential data for targeted instruction tuning.\\\" *arXiv preprint arXiv:2402.04333* (2024).\"}", "{\"title\": \"Official Response to Reviewer WdBE (Part 1/4)\", \"comment\": \"We sincerely thank the reviewer for their comprehensive review and helpful feedback. We will try to address your concerns below.\\n\\n> [W1 & Q1] The main weakness lies in the design of the approach, especially regarding the computation costs. In my recognition, the inference operation based on the gradients and other selection operations are costly, even meets the original training cost. This makes the contribution of the pruning method weak. Regarding this weakness, the authors are encouraged to provide the actual time cost for TIVE and fair comparisons with full training for LLaVA-1.5.\\n\\nWe appreciate your insightful suggestions regarding our methodology and understand your concern about the total time cost. 
We provide the actual time cost of TIVE and the comparison of TIVE's overall time cost to full fine-tuning in the table below (on 8 x A100 80G GPUs).\\n\\n| | Total Time Cost |\\n| ------------------- | ----------------------------------- |\\n| TIVE | ~ 0.9h + 9.6h + 0.2h + 1.7h = 12.4h |\\n| Full-data fine-tune | ~ 11.5h |\\n\\nAs can be observed, warm-up training, gradient computation, task difficulty and instance influence estimation, and visual instruction tuning on the selected subset take approximately **0.9h**, **9.6h**, **0.2h**, and **1.7h**, respectively. Although the time cost for visual instruction tuning (1.7h) is much lower compared to full-data training, the total time cost is slightly higher than full-data SFT. \\n\\nDespite this, our work also has potential contributions to the field. Firstly, we verify the existence of redundancy in visual instruction data and effectively reduce this redundancy using a newly proposed approach. Secondly, there are many existing methods that can be employed to accelerate TIVE [1, 2, 3, 4, 5]. For example, we can select a small subset of samples, compute their gradients, and use them to estimate task difficulty, instead of using the whole dataset. Besides, we can also use these computed influences to train a smaller model for predicting the instance influences of the remaining samples, instead of computing influences based on the large model.\\n\\nWe sincerely thank the reviewer for mentioning the efficiency issue of TIVE. To address it, **we propose a more efficient implementation of TIVE, TIVE-efficient** here. Firstly, we **only select 10% of samples and compute their gradients**. Then, we obtain the task difficulty based on these gradients and compute their instance influence. After that, we train a small model, LLaVA-Qwen-1.5-1.8B, to predict the instance influence of the other samples. 
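The proxy-scoring step described above can be sketched roughly as follows. This is an illustrative sketch under assumed inputs: the fixed-size per-sample `features`, the closed-form ridge regressor, and the function name `proxy_influence_scores` are our simplifications — the actual TIVE-efficient setup trains LLaVA-Qwen-1.5-1.8B as the predictor.

```python
import numpy as np

def proxy_influence_scores(features, seed_idx, seed_scores, l2=1e-3):
    """Predict instance-influence scores for all samples from a small seed set.

    Exact (gradient-based) scores are computed only for a seed subset of
    samples; a cheap ridge regressor then extrapolates to every other sample
    from per-sample features, trading estimation precision for a large
    reduction in gradient-computation cost.
    """
    X, y = features[seed_idx], seed_scores
    d = X.shape[1]
    # closed-form ridge fit: w = (X^T X + l2*I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + l2 * np.eye(d), X.T @ y)
    return features @ w  # predicted influence for the full pool

# usage: score a pool of 10 samples from exact scores of a 3-sample seed subset
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))
seed = np.array([0, 3, 7])
exact = feats[seed] @ np.array([0.5, -0.2, 0.1, 0.3])  # stand-in "exact" scores
pred = proxy_influence_scores(feats, seed, exact)
```

In practice the seed fraction (10% above in our description) controls the trade-off: a larger seed set costs more gradient computation but makes the proxy predictions more faithful.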
In this way, we slightly compromise the estimation precision for task difficulty and instance influence, but greatly reduce TIVE's time cost. We present the total time cost and evaluation results of TIVE-efficient in the table below.\\n\\n| | Total Time Cost |\\n| ------------------- | ----------------------------------------- |\\n| TIVE | ~ 0.9h + 9.6h + 0.2h + 1.7h = 12.4h |\\n| TIVE-efficient | ~ 0.9h + 1.0h + 0.6h + 0.2h + 1.7h = 4.4h |\\n| Full-data fine-tune | ~ 11.5h |\\n\\n\\n**Reference:**\\n\\n[1] Kobayashi, Sosuke, et al. \\\"Efficient estimation of influence of a training instance.\\\" *arXiv preprint arXiv:2012.04207* (2020).\\n\\n[2] Kwon, Yongchan, et al. \\\"DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models.\\\" *The Twelfth International Conference on Learning Representations*.\\n\\n[3] Guu, Kelvin, et al. \\\"Simfluence: Modeling the influence of individual training examples by simulating training runs.\\\" *arXiv preprint arXiv:2303.08114* (2023).\\n\\n[4] Influence tuning: Demoting spurious correlations via instance attribution and instance-driven updates\\n\\n[5] Zhou, Kun, et al. \\\"JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models.\\\" *arXiv preprint arXiv:2405.14365* (2024).\"}", "{\"comment\": \"We express our gratitude once again for your time and feedback. We are keen to understand any additional concerns that might have prevented you from giving a higher score. We are more than willing to further address these concerns for your satisfaction. On the other hand, as other reviewers have increased their scores, we hope you might reconsider increasing the score. We look forward to your response.\"}", "{\"title\": \"Official Response to Reviewer g2yS (Part 1/2)\", \"comment\": \"We are sincerely thankful for the reviewer's insightful observations and suggestions. 
We will address these points in the sections below.\\n\\n> [W1 & Q1] Although the paper considers task difficulty in its method, it could provide a more in-depth analysis of the characteristics of different tasks and how they affect data redundancy and model performance. Is there any example analysis to see how the samples that were filtered out compare to those that were retained?\\n\\nFollowing the suggestion of the reviewer, we provide the proportion of each task within the selected subset after task-level reweighting. This proportion is shown in the table below.\\n\\n| Task | General VQA | Multi-Choice VQA | Grounding | Visual Conversation | Text Conversation |\\n| ---------- | ----------- | ---------------- | --------- | ------------------- | ----------------- |\\n| Proportion | 45.5% | 35.3% | 11.4% | 5.5% | 2.2% |\\n\\nAs we can observe, the VQA data (both general and multi-choice) accounts for the largest proportion of the selected subset, followed by grounding data, whereas conversation-related data occupies the smallest proportion. This suggests that tasks related to visual perception are the most difficult in visual instruction tuning, while visual conversation is relatively simple, and text conversation is the easiest since language models are already capable of tackling such tasks. These findings indicate that the central difficulty in visual instruction tuning still lies in endowing LLMs with the ability to comprehend visual content; hence, the weight of this type of data should be increased. \\n\\nInterestingly, in our empirical study, we discover that increasing the amount of data for certain tasks (such as general VQA and multi-choice VQA) significantly improves the model's performance, while scaling the data of other tasks (such as visual conversation and text conversation) does not. The tasks for which data scaling notably enhances model performance align with the most difficult tasks estimated by our proposed task difficulty. 
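As an editor's illustration of how per-task difficulty scores could be mapped to sampling proportions like those in the table above, here is a minimal temperature-softmax sketch. The difficulty scores are made up for this example (chosen so the output roughly resembles the reported proportions), and `lam` merely stands in for a temperature hyperparameter; the paper's actual weighting scheme is defined in its appendix.

```python
import numpy as np

def task_proportions(difficulty, lam=1.0):
    """Map per-task difficulty scores to sampling proportions via a
    temperature-controlled softmax (lam plays the temperature role)."""
    z = np.asarray(difficulty, dtype=float) / lam
    z -= z.max()                    # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Hypothetical difficulty scores for the five tasks in the table above.
tasks = ["General VQA", "Multi-Choice VQA", "Grounding",
         "Visual Conversation", "Text Conversation"]
props = task_proportions([2.9, 2.65, 1.5, 0.8, 0.0], lam=1.0)
```

A smaller `lam` sharpens the distribution toward the hardest tasks, while a larger `lam` flattens it toward uniform sampling.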
This consistency also validates the effectiveness of our approach.\\n\\n> [W2-1] Why does Instance Influence consider the gradients of other samples? (Formula 1)\\n\\nFormula 1 offers a standard algorithm [1, 2] for computing the influence of an instance $s$ on another instance $s'$. This influence formulation measures how training on one instance $s$ impacts the model's loss on another instance $s'$, which is accomplished by calculating the dot product of the two instances' gradients. When the gradients of the two instances point in a similar direction, training on one instance would correspondingly assist the learning of the other instance. Therefore, we need to compute the gradients of other samples for influence estimation. That said, the instance influence we propose is not the influence between instances as described in Formula 1, but rather **the influence of an instance on its corresponding task**. Therefore, we need to compute the influence of the target instance on all other samples within the same task, average them, and then obtain the instance-task influence, as shown in Formula 2. The resulting value is used as the instance influence for data selection.\\n\\n**Reference:**\\n\\n[1] Pruthi, Garima, et al. \\\"Estimating training data influence by tracing gradient descent.\\\" *Advances in Neural Information Processing Systems* 33 (2020): 19920-19930.\\n\\n[2] Park, Sung Min, et al. \\\"Trak: Attributing model behavior at scale.\\\" *arXiv preprint arXiv:2303.14186* (2023).\"}", "{\"title\": \"One thing to remind about the revision\", \"comment\": \"One thing to remind: you should not use the final print format in the revision and show the authors' names.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Partially yes.\\nThe more interesting thing here is how we can better understand the data construction. For the unselected samples, are these samples non-beneficial from the beginning? 
Or, if otherwise, why, given some other samples presented, are these samples no longer beneficial? Are there methods to measure this influence and mitigate this problem, so that we can always construct a better dataset?\\nThese are the harder open questions; and for the data selection problem, are there some theoretical, or even heuristic, methods that can help us predict the performance saturation?\\nIf the authors have good thoughts on these questions, I am more than happy to have further discussion.\"}", "{\"summary\": \"This paper focuses on the issue of data redundancy in visual instruction tuning for building large vision language models (LVLMs). Through empirical studies, it reveals significant redundancy within visual instruction datasets and proposes a high-value data selection approach named TIVE. TIVE estimates instance influence scores and task difficulty scores based on gradient-based influence functions to select a representative subset of data, reducing training costs while achieving comparable performance to full-data fine-tuned models on multiple benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper addresses a crucial problem in the field of LVLMs: data redundancy in visual instruction tuning. This is an important issue as it can lead to increased training costs and potential overfitting.\\n2. The authors conduct a series of experiments to demonstrate the existence of data redundancy and the effectiveness of their proposed method. The analysis includes pruning the amount of instructions from different tasks and evaluating the performance on various benchmarks, providing strong evidence for their claims.\\n3. The proposed TIVE method is innovative, considering both instance influence and task difficulty scores for data selection. 
This holistic approach takes into account the characteristics of different tasks and instances, making it more effective than traditional data selection methods.\", \"weaknesses\": \"1. Lack of in-depth analysis of task characteristics: Although the paper considers task difficulty in its method, it could provide a more in-depth analysis of the characteristics of different tasks and how they affect data redundancy and model performance.\\n2. Why does Instance Influence consider the gradients of other samples (Formula 1)? Is it for the normalized comparison of all samples in Formula 2?\\n3. A significant limitation of the current study is the lack of ablation experiments to evaluate the relative importance of instance selection versus task selection. The paper would be strengthened by comparing the proposed method against two baseline scenarios: one using only instance influence for global selection without task-level grouping, and another applying task selection first followed by random sampling or established methods like GraNd within each selected task. Such comparisons would help quantify the individual contributions of these two selection mechanisms and provide stronger justification for the proposed two-stage approach.\", \"questions\": \"Is there any example analysis to see how the samples that were filtered out compare to those that were retained?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer again for the time and effort invested in evaluating our study. We hope that our responses have resolved the raised concerns. We respectfully ask for a reconsideration of the score in light of these responses. Please let us know if you have any further feedback or concerns. 
We are more than willing to engage in further discussions to clarify any remaining issues.\"}", "{\"summary\": \"The paper investigates redundancy within visual instruction datasets used for fine-tuning large vision-language models (LVLMs). It presents an empirical analysis which reveals that reducing the instruction data does not significantly impact model performance, suggesting the potential for data reduction. To address this, the authors propose TIVE, a novel method that selects high-value data based on task difficulty and instance influence using gradient-based techniques. Experiments demonstrate that TIVE can achieve comparable or even superior results to full-data models while using only 15% of the dataset. The proposed method provides a more efficient approach to visual instruction tuning by minimizing training costs and redundancy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a well-justified and innovative method, TIVE, that addresses data redundancy in visual instruction datasets for LVLMs.\\n2. The motivation for addressing redundancy is well explained, and the proposed solution is logically developed based on detailed empirical findings.\\n3. The authors provide thorough empirical evidence demonstrating the existence of redundancy within current visual instruction datasets, supporting the motivation for their approach.\", \"weaknesses\": \"1. The paper does not sufficiently discuss the potential limitations of the TIVE approach, such as its scalability to even larger datasets or its applicability to different types of multimodal tasks.\\n2. I have some concerns regarding the data selection approach. In the earlier stages of machine learning, data and feature selection were widely popular. However, recent trends show that using larger models with bigger datasets tends to yield remarkable generalization capabilities. 
I hope the authors can address this concern in their rebuttal.\", \"questions\": \"Please refer to weakness part\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response to Reviewer g2yS (Part 2/2)\", \"comment\": \"> [W2-2] Is it for the normalized comparison of all samples in Formula 2?\\n\\n\\n$$\\nv^i_{s} = \\\\frac{1}{|D_i|}\\\\sum_{s' \\\\in D_i \\\\setminus s} \\\\frac{\\\\nabla l(s, \\\\theta) \\\\cdot \\\\nabla l(s', \\\\theta)}{|\\\\nabla l(s, \\\\theta)| |\\\\nabla l(s', \\\\theta)|}.\\n$$\\n\\n\\nWe present Formula 2 above. With the numerator $\\\\nabla l(s, \\\\theta) \\\\cdot \\\\nabla l(s', \\\\theta)$ inside the summation, we compute the influence of instance $s$ on all other instances. We do this to estimate the influence of $s$ on its corresponding task, as described in our response to the previous question. As for the denominator $|\\\\nabla l(s, \\\\theta)| |\\\\nabla l(s', \\\\theta)|$, it serves the purpose of normalization: it mitigates the potential impact of instances with excessively large gradients on the final influence computation, thereby avoiding introducing a large amount of noisy data.\\n\\n\\n\\n> [W3] A significant limitation of the current study is the lack of ablation experiments to evaluate the relative importance of instance selection versus task selection. The paper would be strengthened by comparing the proposed method against two baseline scenarios: one using only instance influence for global selection without task-level grouping, and another applying task selection first followed by random sampling or established methods like GraNd within each selected task. 
Such comparisons would help quantify the individual contributions of these two selection mechanisms and provide stronger justification for the proposed two-stage approach.\\n\\n| Benchmarks | Both | Only Task-level | Only Instance-level w/ ETG | Only Instance-level w/o ETG | Neither |\\n| ---------- | ---- | --------------- | -------------------------- | --------------------------- | ------- |\\n| SQA-I | 70.6 | 69.8 | 68.2 | 67.5 | 68.4 |\\n| MMB | 65.0 | 63.7 | 62.9 | 62.6 | 62.5 |\\n| SEED-I | 63.2 | 62.7 | 62.9 | 62.3 | 62.2 |\\n\\nThanks for your valuable feedback! We sincerely apologize for the unclear presentation, but we have indeed conducted the experiments in Table 4 and presented the related analysis in the ablation studies. In those experiments, we evaluate the efficacy of task-level value by selecting data based on instance influence while setting the proportion of all tasks in the target data subset to a fixed value (all tasks are considered equal regardless of their difficulty). The efficacy of instance-level value is verified by computing task weights based on task difficulty but selecting instances within each task's data randomly. In response to the reviewer's suggestions, we have supplemented our experiments by focusing solely on data selection at the instance level without considering task proportions. We present the results in the table above. In the table, ETG indicates \\\"equal task grouping\\\", i.e., whether we set a fixed, equal proportion for each task or select data purely at the instance level. The above results indicate that both task-based and instance-based selection contribute to the improvement in the final model's performance. However, selection based on task difficulty alone contributes more substantially to redundancy reduction than selection based on instance influence alone. 
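To make the two-level selection discussed in this thread concrete, here is a simplified sketch with hypothetical scores and task labels (in the actual pipeline, task weights come from task difficulty and instance scores from influence): each task receives a quota proportional to its weight, and the top-scoring instances fill that quota.

```python
import numpy as np

def select_subset(scores, task_ids, task_weights, budget):
    """Two-level selection: a per-task quota proportional to task weight,
    then the top-scoring instances within each task (simplified view)."""
    scores = np.asarray(scores, dtype=float)
    task_ids = np.asarray(task_ids)
    chosen = []
    for t, w in task_weights.items():
        quota = int(round(w * budget))                 # task-level quota
        idx = np.flatnonzero(task_ids == t)            # samples of task t
        top = idx[np.argsort(-scores[idx])[:quota]]    # instance-level pick
        chosen.extend(top.tolist())
    return sorted(chosen)

# Toy example: six samples across two tasks, equal task weights, budget of 4.
scores = np.array([0.9, 0.1, 0.5, 0.8, 0.2, 0.7])
task_ids = np.array(["a", "a", "a", "b", "b", "b"])
chosen = select_subset(scores, task_ids, {"a": 0.5, "b": 0.5}, budget=4)
```

Dropping the task grouping (one global top-k over `scores`) or replacing the instance scores with random picks yields the "only instance-level" and "only task-level" ablation variants discussed above.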
This suggests that in the current visual instruction data, redundancy between tasks is more pronounced than between instances; hence, reducing redundancy at the task level proves to be more effective.\\n\\nWe would like to thank the reviewer again for your insightful comments and constructive criticism. Your feedback has greatly contributed to improving the quality of our work. We hope that our responses have adequately addressed your concerns. We kindly request you to consider raising the score, and we welcome any further feedback or concerns that you might have. We are more than willing to engage in further discussions to clarify any remaining issues.\"}", "{\"comment\": \"Dear Reviewer g2yS,\\n\\nThank you for your time and effort in reviewing our paper. We have addressed your comments and want to kindly remind you that the rebuttal period is ending soon. We look forward to discussing our responses and any other aspects of our research with you. Please let us know if you have any concerns.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thanks to the authors for the careful revision and follow-ups. The initial 6 points are to encourage meaningful improvement; if the cost-performance trade-off is not solved, this work will fall below the threshold in the final review. Currently, the quality of this work has improved substantially and is above the acceptance threshold, and I hold a positive view. I will increase my confidence score to 5 and suggest considering acceptance, while I may not raise the score further, according to the work quality distribution of ICLR.\\n\\n(For follow-up work, if the best amount of data for instruction fine-tuning could be further studied, not only empirically but also with insight into why it is that ratio, I would rate that kind of work with 8 or even 10. But that is a harder problem.)\"}", "{\"comment\": \"Dear Reviewer WdBE,\\n\\nThank you for the time and effort you generously invested in reviewing our manuscript. 
We've tried to carefully address your concerns in our response. We hope that our detailed response, the supplementary experiments, and the revised version of our manuscript can successfully address your concerns.\\n\\nAs the discussion phase is drawing to a close, we would be appreciative if you could spare some time to go over our response. If our responses have successfully addressed your concerns, would you consider reevaluating your initial assessment and possibly adjusting the score? If any unresolved issues still exist, we are fully prepared to tackle them.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Glad to see your thoughts on these questions. For the next version of your paper, I would suggest including some of these insights in the discussion. And wish you good luck!\"}", "{\"comment\": \"Thank you for your positive feedback and for providing us with valuable suggestions to further improve our work. We have carefully considered your feedback and incorporated the revisions in the updated manuscript. We hope that these revisions can address your concerns and strengthen the paper. We are very grateful for your constructive suggestions and your positive view of our work.\\n\\nIf you have any further feedback or additional concerns, we are happy to address them. Thank you again for your time and consideration.\"}", "{\"title\": \"Official Comment by Reviewer WdBE\", \"comment\": \"I appreciate the authors' efforts in improving the manuscript, particularly in the design of efficiency. They have addressed most of my concerns, so I will raise my score. However, I cannot give a higher score because the authors have not demonstrated significant improvements in motivations and methods compared to previous approaches in data selection.\"}", "{\"title\": \"Official Response to Reviewer i5DE (Part 2/2)\", \"comment\": \"> [W2] I have some concerns regarding the data selection approach. 
In the earlier stages of machine learning, data and feature selection were widely popular. However, recent trends show that using larger models with bigger datasets tends to yield remarkable generalization capabilities. I hope the authors can address this concern in their rebuttal.\\n\\nThis is an interesting question. Existing large models, particularly large language models (LLMs), typically have two training stages. In the first stage, the model is pre-trained on a vast amount of unsupervised data. In the second stage, the model is fine-tuned on supervised data. It is true that during the pre-training stage, using larger models with bigger datasets tends to yield remarkable generalization capabilities. This phenomenon is often referred to as the scaling law [1], where the model's generalization performance is continually improved by increasing the quantity of model parameters and training data. However, our research primarily focuses on the model's second training stage, namely supervised fine-tuning (with a specific focus on instruction tuning in our study). This stage is primarily aimed at aligning LLMs with human intent, that is, learning to follow human instructions and produce helpful responses. In this instruction tuning stage, we do not introduce new knowledge or capabilities into the model other than human intention alignment. Therefore, the scaling law is not universally applicable.\\n\\nResearch on instruction tuning of LLMs has found that redundancy exists in language instruction data [2, 3, 4, 5]. A small amount of high-quality data can enable the model to achieve excellent alignment performance, while introducing an excessive amount of data might potentially lead to overfitting [6], and even impair the LLM's original generalization capabilities. For visual instruction tuning, the language model needs to learn not only human instruction following, but also visual understanding. 
To achieve these two goals, existing visual instruction data often combines traditional visual data (captions, VQA, grounding) with synthesized visual instruction following data [7, 8, 9, 10], which leads to increased redundancy due to this straightforward combination method.\\n\\nIn our study, we first demonstrate that redundancy in visual instruction data is indeed significant through an empirical study. Subsequently, we propose a method, TIVE, for estimating data value based on task difficulty and instance influence. This method particularly considers the potential causes of redundancy that might exist in visual instruction data derived from multiple different sources, and performs data selection based on this estimated data value. Ultimately, our approach proves to be effective across different visual instruction sets and various models. Therefore, we assert that it is necessary to eliminate redundancy and select high-quality data in the context of visual instruction tuning. The method we propose, which effectively eliminates redundancy in visual instruction data, also makes a significant contribution.\\n\\nWe thank the reviewer again for your valuable feedback. We hope that our responses have sufficiently addressed your concerns. We kindly request that you consider revising the score based on these clarifications. We welcome any additional feedback or concerns you may have and are more than willing to engage in further discussions to clarify any remaining issues.\\n\\n**Reference:**\\n\\n[1] Kaplan, Jared, et al. \\\"Scaling laws for neural language models.\\\" *arXiv preprint arXiv:2001.08361* (2020).\\n\\n[2] Zhou, Chunting, et al. \\\"Lima: Less is more for alignment.\\\" *Advances in Neural Information Processing Systems* 36 (2024).\\n\\n[3] Liu, Wei, et al. \\\"What makes good data for alignment? a comprehensive study of automatic data selection in instruction tuning.\\\" *arXiv preprint arXiv:2312.15685* (2023).\\n\\n[4] Xia, Mengzhou, et al. 
\\\"Less: Selecting influential data for targeted instruction tuning.\\\" *arXiv preprint arXiv:2402.04333* (2024).\\n\\n[5] Li, Ming, et al. \\\"From quantity to quality: Boosting llm performance with self-guided data selection for instruction tuning.\\\" *arXiv preprint arXiv:2308.12032* (2023).\\n\\n[6] Shi, Zhengyan, et al. \\\"Instruction Tuning With Loss Over Instructions.\\\" *arXiv preprint arXiv:2405.14394* (2024).\\n\\n[7] Liu, Haotian, et al. \\\"Visual instruction tuning.\\\" *Advances in neural information processing systems* 36 (2024).\\n\\n[8] Liu, Haotian, et al. \\\"Improved baselines with visual instruction tuning.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\\n\\n[9] Zhu, Deyao, et al. \\\"Minigpt-4: Enhancing vision-language understanding with advanced large language models.\\\" *arXiv preprint arXiv:2304.10592* (2023).\\n\\n[10] Zhao, Bo, et al. \\\"Svit: Scaling up visual instruction tuning.\\\" *arXiv preprint arXiv:2307.04087* (2023).\"}", "{\"comment\": \"Dear Reviewer i5DE,\\n\\nThank you for your time and effort in reviewing our paper. We have addressed your comments and want to kindly remind you that the rebuttal period is ending soon. We look forward to discussing our responses and any other aspects of our research with you. Please let us know if you have any concerns.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer g2yS,\\n\\nThank you for your time and effort in reviewing our paper. We have addressed your comments and want to kindly remind you that the rebuttal period is ending soon. We look forward to discussing our responses and any other aspects of our research with you. Please let us know if you have any concerns.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"General Response\", \"comment\": \"We would like to express our gratitude once again to the reviewers for their insightful feedback and suggestions. 
We have uploaded a revised version of our paper, in which the modifications are highlighted in blue for your convenience. The main modifications are:\\n\\n1. In Section 3, we add the design details of TIVE-efficient (WdBE, P1cf). We also provide a more precise definition of the hyperparameter $\\\\lambda$ in advance (P1cf) and the efficient linear implementation for computing instance influence (P1cf). \\n2. In Section 4, we provide the experimental results of TIVE-efficient (WdBE, P1cf) and the additional baseline results of MoSo and LESS in the main table (WdBE). \\n3. In Section 4, we re-present our ablation study on the relative importance of instance-level selection versus task-level selection. Additionally, we add the results of TIVE using only instance influence without any task grouping (g2yS).\\n4. In Appendix A, we provide the detailed time cost of TIVE and TIVE-efficient compared to full-data training (WdBE, P1cf).\\n5. In Appendix E, we update the two-pass linear algorithm for computing instance-task influence in Algorithm 1 (P1cf).\\n6. In Appendix F, we first present the task proportions of the data subset selected by TIVE and conduct a thorough analysis (g2yS). Subsequently, we demonstrate the detailed results of how the model's performance changes on various downstream tasks at higher sampling rates (WdBE, P1cf). Finally, we incorporate experiments evaluating the transferability of the data subsets selected by TIVE across different models.\\n7. In Appendix G, we incorporate a discussion on the limitations of our approach (i5DE).\\n\\nWe sincerely thank the reviewers for your effort in reviewing this paper and hope this revised version can better address your concerns.\"}", "{\"title\": \"Official Response to Reviewer P1cf (Part 1/2)\", \"comment\": \"We deeply appreciate the reviewer's positive review and the insightful comments. 
We will clarify the raised concerns in the subsequent sections.\\n\\n> [Q1] In algorithm 1, the task influence is calculated in a nested for loop, with a quadratic overall complexity for each task. A question is, could the authors first use one pass to aggregate the average of the normalized gradients and then use another pass to calculate the score? This will reduce the complexity to linear. Will this cause numerical instability or not? Originally, were the gradients stored or re-computed?\\n\\nThanks for your feedback on the approach details. The answer to the first question is yes. In fact, the influence computation based on a double loop is equivalent to first calculating the average gradient in one pass and then computing the influence, and this does not result in numerical instability. As for the third question, all gradients are pre-computed and stored through one pass. We use LoRA training and random projection to reduce the dimensionality of these gradients, thus the subsequent storage cost of the gradient features and the computation cost of gradient-based influence are both much lower than the cost of computing the gradients themselves.\\n\\n---\\n> [Q2] Are the influence scores' gradients of a sample computed over all tokens in it and then averaged, or only on the output part?\\n\\nThis is an interesting question. We follow the mainstream visual instruction tuning approaches [1, 2, 3] and only compute gradients on the output tokens. \\n\\n---\\n\\n> [Q3] In the line 301, $\\\\lambda$ is introduced as \\\"We use a hyperparameter to control the temperature of the weight distribution\\\". However, how actually it is used is presented in Line 871 in appendix. The ablation of $\\\\lambda$ appears before readers know how actually it is used. The ordering of this part needs further consideration.\\n\\nWe greatly appreciate your suggestions regarding the presentation in our manuscript. 
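The equivalence claimed in the [Q1] answer can be checked directly on toy, pre-projected gradients (all shapes hypothetical): one pass normalizes and sums the gradients, a second pass scores each sample against that sum, and the result matches the quadratic nested loop up to floating-point error.

```python
import numpy as np

def influence_two_pass(grads):
    """Linear-time instance-task influence: v_s = (g_hat_s . (S - g_hat_s)) / n,
    where S is the sum of all normalized gradients and g_hat_s . g_hat_s = 1."""
    g = np.asarray(grads, dtype=float)
    n = len(g)
    g_hat = g / np.linalg.norm(g, axis=1, keepdims=True)  # pass 1: normalize
    S = g_hat.sum(axis=0)                                 # pass 1: aggregate
    return (g_hat @ S - 1.0) / n                          # pass 2: score

def influence_nested(grads):
    """Reference O(n^2) nested-loop version of the same quantity."""
    g = np.asarray(grads, dtype=float)
    n = len(g)
    out = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            out[i] += g[i] @ g[j] / (np.linalg.norm(g[i]) * np.linalg.norm(g[j]))
    return out / n

rng = np.random.default_rng(1)
grads = rng.normal(size=(10, 32))   # 10 samples, 32-dim projected gradients
```

The only care needed is subtracting the self-term: each normalized gradient's dot product with itself is exactly 1, so removing it after the matrix-vector product reproduces the sum over `s' != s`.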
We sincerely apologize for not providing a proper explanation of the term $\\\\lambda$ in the methodology section. We will ensure to rectify this in the revised version.\\n\\n\\n---\\n> [W3-1] As TIVE needs the reference model trained with warmup data, the selection of TIVE is generally model-specific.\\n\\n| Method | MME-P | MMBench | SEED-I | SQA-I | Avg. |\\n| ------------------------- | ------ | ------- | ------ | ----- | ---- |\\n| Random | 1456.6 | 64.9 | 63.4 | 69.4 | 67.6 |\\n| Length | 1445.4 | 62.8 | 63.2 | 69.8 | 67.0 |\\n| TIVE from LLaVA-Vicuna-7B | 1498.2 | 66.1 | 64.7 | 72.0 | 69.4 |\\n| TIVE from LLaVA-Phi-3-4B | 1488.4 | 64.8 | 64.0 | 71.4 | 68.7 |\\n| TIVE from LLaVA-LLaMA3-8B | 1503.3 | 65.6 | 63.6 | 71.8 | 69.0 |\\n| TIVE | 1502.9 | 66.1 | 65.6 | 72.2 | 69.8 |\\n\\nIn our experiments, we indeed need to train a unique reference model for each individual model. The reference model should have the same backbone as the target model, so the selection of TIVE is model-specific. However, upon further analysis, we discover that the selected data subset is actually transferable. For instance, the data selected based on the LLaVA-Vicuna-7B reference model is also applicable to LLaVA-Vicuna-13B or LLaVA-LLaMA3-8B. We present the results in the table above, where we select data subsets through a series of smaller models and conduct training on a larger model, LLaVA-Vicuna-13B. The results indicate that these data subsets are indeed transferable. Although they perform slightly worse than the data subsets selected based on the same model (LLaVA-Vicuna-13B), they still significantly outperform other baseline methods. This suggests that TIVE is not entirely model-specific. When computational resources are extremely limited, selecting data on a smaller model and transferring it to other larger models is an efficient alternative.\\n\\n\\n**Reference:**\\n\\n[1] Liu, Haotian, et al. 
\\\"Visual instruction tuning.\\\" *Advances in neural information processing systems* 36 (2024).\\n\\n[2] Liu, Haotian, et al. \\\"Improved baselines with visual instruction tuning.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\\n\\n[3] Zhu, Deyao, et al. \\\"Minigpt-4: Enhancing vision-language understanding with advanced large language models.\\\" *arXiv preprint arXiv:2304.10592* (2023).\"}", "{\"comment\": \"Dear Reviewer i5DE,\\n\\nThank you for the time and effort you generously invest reviewing our manuscript. We've tried to carefully address your concerns in our response. We hope that our detailed response, the supplementary experiments, and the revised version of our manuscript can successfully address your concern.\\n\\nAs the discussion phase is drawing to a close, we would be appreciative if you could spare some time to go over our response. If our responses have successfully addressed your concerns, would you might consider reevaluating your initial assessment and possibly adjusting the score? If any unresolved issues still exist, we are fully prepared to tackle them.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Official Response to Reviewer P1cf (Part 2/2)\", \"comment\": \"> [W3-2] TIVE needs to compute the LoRA gradient over all samples in the pool, then this cost is close to training on all of the data with LoRA. From this perspective, this method may fail to reduce the overall training costs.\\n\\nThanks for your insightful suggestion. In our original TIVE implementation, we do have to compute the LoRA gradients over all samples and the overall cost of TIVE is slightly higher than full-data fine-tuning. To address this, we propose an efficient implementation of TIVE, TIVE-efficient, where we only randomly select 10% of samples and compute their gradients. We obtain the task difficulty based on these gradients and compute their instance influence. 
After that, we train a smaller model, LLaVA-Qwen-1.5-1.8B, for predicting instance influence. In this way, we slightly compromise the estimation precision for task difficulty and instance influence, but significantly reduce TIVE's time cost. We present the time cost and evaluation results of TIVE-efficient in the tables below. As can be observed, TIVE-efficient substantially reduces the time cost of TIVE, without any significant compromise in performance.\\n\\n| | Total Time Cost |\\n| -------------- | --------------- |\\n| TIVE | ~ 12.6h |\\n| TIVE-efficient | ~ 4.6h |\\n| Full-data SFT | ~ 11.5h |\\n\\n| Method | MME-P | MMBench | SEED-I | SQA-I | Avg. |\\n| ------------------ | ---------- | -------- | -------- | -------- | -------- |\\n| Random | 1386.5 | 61.8 | 61.9 | 68.4 | 65.4 |\\n| Length | 1413.0 | 59.3 | 61.2 | 69.2 | 65.1 |\\n| Perplexity | 1393.3 | 62.3 | 61.3 | 67.9 | 65.3 |\\n| GraNd | 1400.5 | 62.9 | 62.3 | 68.4 | 65.9 |\\n| EL2N | 1356.5 | 61.6 | 61.9 | 66.2 | 64.5 |\\n| **TIVE** | **1433.0** | **65.0** | **63.2** | **70.6** | **67.6** |\\n| **TIVE-efficient** | **1424.9** | **64.3** | **62.5** | **70.8** | **67.2** |\\n\\n\\n---\\n> [W3-3] Tuning the hyper-parameters of TIVE would give another dimension of complexity if there are no default hyper-parameters.\\n\\nYes, finding the optimal hyperparameters can introduce additional time cost. Therefore, we present the impact of these hyperparameters on model performance in our ablation study. We hope that these results can assist the selection of optimal hyperparameters, thereby reducing the associated time cost.\\n\\n\\n---\\n> [W3-4] From this perspective, this method may fail to reduce the overall training costs. If so, it needs to target improving the final performance (without insisting on 15% of data) and discuss more about how to achieve this (what proportion of data is the best?). 
If not, the corresponding additional cost should be discussed.\\n\\n| Sampling rate | MME-P | MMBench | SEED-I | SQA-I | Avg. |\\n| -------------- | ------ | ------- | ------ | ----- | ---- |\\n| 15% | 1433.0 | 65.0 | 63.2 | 70.6 | 67.6 |\\n| 30% | 1477.2 | 66.5 | 64.6 | 70.8 | 68.9 |\\n| 50% | 1506.1 | 66.7 | 66.2 | 69.6 | 69.3 |\\n| 100%(baseline) | 1510.7 | 64.3 | 66.1 | 66.8 | 68.2 |\\n\\nFirstly, we have discussed the additional cost and the way to reduce cost; see our response above. Secondly, without insisting on a 15% sampling rate, our models **can be further enhanced**. In the above table, we present the results of TIVE at various sampling rates, including 15%, 30%, 50%, and 100%. At a sampling rate of 50%, TIVE exhibits the best average task performance, achieving comparable or better performance relative to the full-data baseline across nearly all benchmarks. Furthermore, we discover that the performance of the model on different downstream benchmarks varies with increasing sampling rates. We observe a consistent performance improvement on MME-P and SEED-I, while on MMBench and SQA-I, the model's performance exhibits a trend of initial increase followed by a decline. We posit that this phenomenon is attributable to the characteristics of the downstream tasks. For tasks that rely more on visual perception (such as MME-P and SEED-I), the benefits of improved visual perception capability from increased data size outweigh the negative impact of redundancy. However, for tasks that rely more on inference (such as MMBench and SQA-I), a small amount of data can help the model learn basic inference patterns in visual scenarios, while the risk of potential overfitting caused by increased data size may interfere with its inference process, causing a significant negative impact. From the perspective of average performance across all tasks, a sampling rate of around 50% appears to be ideal.
However, in practical scenarios, the optimal choice of sampling rate needs to consider the specific task type, as well as the trade-off between performance and time cost.\"}", "{\"title\": \"Official Response to Reviewer i5DE (Part 1/2)\", \"comment\": \"We sincerely thank the reviewer for their time and the valuable feedback provided. We will endeavor to address all the raised concerns in the subsequent sections.\\n\\n> [W1] The paper does not sufficiently discuss the potential limitations of the TIVE approach, such as its scalability to even larger datasets or its applicability to different types of multimodal tasks.\\n\\nWe appreciate your insightful suggestions regarding the limitations of our methodology. We'll discuss the limitations as follows:\\n\\n**TIVE's scalability to even larger datasets**\\n\\nOur experiments on three datasets have demonstrated the effectiveness of our approach on datasets ranging from 0.5M to 2M. Ideally, our method can be adapted to any visual instruction dataset. However, practically, the computational cost of gradient calculations for larger instruction datasets (reaching the scale of 10M or even 100M) is substantial, necessitating a more efficient implementation to make the application of TIVE on these instruction datasets feasible. Simultaneously, when our target instruction dataset encompasses a vast number of tasks (over 1000), and the differences between each task are not particularly distinct, the effectiveness of our approach warrants further exploration.\\n\\n**TIVE's applicability to different types of multimodal tasks**\\n\\nIn the era of LLMs (Large Language Models), it is critical for models to exhibit strong generalization abilities, meaning that after training on a specific instruction dataset, they should be able to generalize to a wide variety of distinct tasks. This is also true for LVLMs (Large Vision-Language Models). Hence, our method is independent of the downstream tasks.
Moreover, the applicability of TIVE that we discuss here does not conform to the traditional setting where training and testing data fall under the same domain. In our experiments, we ensure that there is no overlap between the instruction data used for training and the downstream tasks. Despite this, we acknowledge the insufficiency in our understanding of TIVE's impact on the model's generalized performance across different tasks. Intuitively, the influence of data redundancy on the learning of various model capabilities is different. For highly vision-related abilities, such as entity recognition or OCR, the learning of these abilities necessitates a large amount of data. In this case, pruning the original instruction dataset through TIVE could easily lead to a decline in these specific abilities. Conversely, for some reasoning tasks, as these abilities primarily derive from LLMs, they do not require a significant amount of data for learning. Therefore, the removal of redundancy has little to no impact on these abilities, and may even yield improvements. Our method eliminates redundancy from a comprehensive perspective, but overlooks how to establish an optimal parameter (such as sampling rate, temperature) for dataset selection in specific scenarios, with the aim of retaining the most effective specific capabilities.\\n\\n**Other potential limitations**\\n\\nFirstly, theoretically speaking, our method can be applied to any training scenario, not just limited to visual instruction tuning. However, we have not sufficiently discussed these types of scenarios in our paper. Secondly, we address the redundancy issue in a large dataset composed of a vast amount of highly diverse instruction data combinations, but our approach necessitates the use of existing task labels to categorize these different data points. When these labels are either non-existent or inaccurate, we may need to manually categorize these different data points using methods such as clustering.
In such circumstances, whether TIVE remains effective warrants further exploration. Thirdly, given the requirement for gradient computation, TIVE entails a relatively high time cost. Although we can optimize the calculation of task difficulty and instance influence with existing techniques, these strategies may introduce a certain degree of estimation deviation. While these losses are relatively minor, there is still room for better implementation.\"}", "{\"comment\": \"Dear Reviewer P1cf,\\n\\nThank you for the time and effort you generously invest in reviewing our manuscript. We've tried to carefully address your concerns in our response. We hope that our detailed response, the supplementary experiments, and the revised version of our manuscript can successfully address your concerns.\\n\\nAs the discussion phase is drawing to a close, we would be appreciative if you could spare some time to go over our response. If our responses have successfully addressed your concerns, might you consider reevaluating your initial assessment and possibly adjusting the score? If any unresolved issues still exist, we are fully prepared to tackle them.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for your reminder. We have updated the revision to the correct format.\"}", "{\"comment\": \"Dear Reviewer g2yS,\\n\\nThank you for the time and effort you generously invest in reviewing our manuscript. We've tried to carefully address your concerns in our response. We hope that our detailed response, the supplementary experiments, and the revised version of our manuscript can successfully address your concern.\\n\\nAs the discussion phase is drawing to a close, we would be appreciative if you could spare some time to go over our response. If our responses have successfully addressed your concerns, might you consider reevaluating your initial assessment and possibly adjusting the score?
If any unresolved issues still exist, we are fully prepared to tackle them.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Official Response to Reviewer WdBE (Part 4/4)\", \"comment\": \"> [Q3] The authors could explain the design of gradient inference in detail and show the relations with the original training data and the backbone.\\n\\nYes. TIVE takes a Large Vision-Language Model (LVLM) and a visual instruction dataset $\\mathcal{D}$ as input, with the aim of selecting a high-quality subset $\\mathcal{D_T}$ from the original dataset $\\mathcal{D}$ without compromising performance. Firstly, we sample a small fraction of data from the original instruction data (utilizing the entire dataset is feasible, but it incurs additional computational cost). Then, this subset of data is used to train a reference model via LoRA[1], which we refer to as warm-up training. LoRA introduces trainable light-weight modules within each layer of the model while freezing the parameters of other components. We incorporate LoRA into the LLM counterpart of the LVLM, as it accounts for the largest proportion of parameters. The purpose of warm-up training is to let the reference model learn the visual instruction-following capability without overfitting to the distribution of the whole visual instruction dataset. After training the reference model, we compute gradients for the original visual instruction dataset. Specifically, we compute the gradient for each sample on each LoRA module, then concatenate the gradients from all LoRA modules. After that, we reduce the dimensionality of gradient features via random projection. We can easily pre-compute and store the gradient features since they have low dimension.
Subsequently, based on these precomputed gradients, we calculate task difficulty and instance influence, and use these two estimates for data selection.\\n\\nWe would like to express our sincere gratitude once again for the insightful comments and suggestions provided by the reviewers. We hope that our responses have adequately addressed your concerns. We kindly request you to consider raising the score, and we welcome any further feedback or concerns that you might have. We are more than willing to engage in further discussions to clarify any remaining issues.\\n\\n**Reference:**\\n\\n[1] Hu, Edward J., et al. \\\"Lora: Low-rank adaptation of large language models.\\\" *arXiv preprint arXiv:2106.09685* (2021).\"}", "{\"summary\": \"The paper concentrates on reducing the data redundancy of instruction-following MLLMs. The authors show that pruning a certain ratio of specific training data has a slight influence on the overall accuracy. Based on the observation, the authors present a data selection approach named TIVE. The data selection strategy is based on estimating task difficulty and instance influence. Then the gradient features are used for selection. The authors integrate the approach on several MLLM backbones including LLaVA-1.5, LLaVA-LLaMA3, Mini-Gemini, etc. Experiments on multimodal benchmarks show an increase compared with random selection.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The experiments are complete across various MLLM backbones, including Vicuna, Phi, and LLaMA3, and architectures, including LLaVA-1.5, SVIT-Mix, and Mini-Gemini. The authors also show comparisons with baselines / advanced MLLMs.\\n2. The performance meets the full baselines with only 10% to 30% training data, which shows the effectiveness of TIVE.\\n3. The paper is well written and the formulation periods are clear.\", \"weaknesses\": \"1. 
The main weakness lies in the design of the approach, especially regarding the computation costs. In my understanding, the inference operation based on the gradients and other selection operations are costly, even meeting the original training cost. This makes the contribution of the pruning method weak.\\n2. The selection based on gradients is a posterior probability, which means choosing the hard samples as prior knowledge. This may be unfair for the comparisons against baselines.\\n3. The overall performance in Table 1 against backbone models is weak, only showing significant improvement on the SciQA benchmark, with accuracy drops or merely comparable results on other benchmarks (may be due to experiment uncertainty). This may mean the selection approach is sub-optimal.\", \"questions\": \"See the weakness part. The authors are encouraged to answer such questions.\\n1. Regarding weakness 1, the authors are encouraged to provide the actual time cost for TIVE and fair comparisons with full training for LLaVA-1.5.\\n2. The accuracy drop for TIVE is significant compared with the baseline. \\n3. The authors could explain the design of gradient inference in detail and show the relations with the original training data and the backbone.\\nTherefore, I recommend rejecting this manuscript in its current version. I would like to increase my score if the authors could address the issues above or provide more results against approaches with similar targets.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your suggestions and positive recognition of our work. We will continue to seek the optimal data selection method and further optimize our paper.\"}", "{\"comment\": \"Dear Reviewer g2yS,\\n\\nThanks for your valuable time and hard work in reviewing our paper. We've made our best effort to respond to your comments.
As we're nearing the end of the discussion stage, we are writing to kindly remind you that the rebuttal period is coming to a close. We're looking forward to continuing the conversation about our responses and any other parts of our research with you. Please let us know if you have any concerns.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"metareview\": \"### **Summary**\\nThis paper introduces **TIVE**, a method designed to reduce redundancy in visual instruction datasets by leveraging **task difficulty** and **instance influence** scores calculated via gradient-based techniques. The approach is evaluated on multiple vision-language models (LVLMs) and benchmarks, claiming comparable or superior performance to full-data fine-tuning with only 15% of the dataset.\\n\\n### **Strengths**\\n1. **Problem Relevance**:\\n - The issue of data redundancy in visual instruction datasets is a timely and important topic in scaling LVLMs.\\n2. **Empirical Validation**:\\n - The authors provide extensive experimental results across various LVLM architectures and benchmarks.\\n3. **Efficiency Improvements**:\\n - The introduction of **TIVE-efficient** shows an attempt to reduce computational costs, which is a positive step toward scalability.\\n\\n### **Weaknesses**\\n1. **Insufficient Novelty and Impact**:\\n - The approach, while novel in combining task- and instance-level selection, does not offer significant improvements over baseline methods.\\n - In some benchmarks, performance gains are marginal or nonexistent, raising concerns about the practical utility of such aggressive data pruning.\\n\\n2. **Scalability and Practicality**:\\n - The computational overhead of TIVE remains high, especially for larger datasets, even with TIVE-efficient. 
The applicability to datasets larger than those tested (e.g., >10M samples) is unclear and largely speculative.\\n - The reliance on gradient-based influence functions makes the approach resource-intensive, potentially negating the claimed efficiency benefits in realistic scenarios.\\n\\n3. **Limited Task Insights**:\\n - Although the method incorporates task difficulty, the paper does not provide a deep understanding of how task characteristics influence redundancy or the success of TIVE.\\n - The empirical analysis lacks qualitative insights into the differences between selected and filtered data, leaving questions about the robustness of the selection criteria.\\n\\n4. **Methodological Concerns**:\\n - The proposed method relies heavily on prior knowledge of task labels and task-specific gradients, which may not generalize to more diverse or unlabeled datasets.\\n - Comparisons to baselines are not entirely convincing, as the method\\u2019s advantage appears to depend on specific configurations and datasets.\\n\\n5. **Presentation Issues**:\\n - Key concepts (e.g., gradient-based influence, task difficulty) are not clearly explained, making the methodology difficult to follow.\\n - The ablation studies, while improved during the rebuttal, do not provide enough evidence to justify the two-stage approach over simpler alternatives.\\n\\n### **Recommendation**\\nThe proposed solution lacks sufficient novelty, scalability, and practical impact to justify acceptance. The marginal improvements and methodological limitations do not demonstrate a compelling advancement over existing methods. 
Addressing these issues in future work, particularly by improving scalability and providing deeper insights into task-specific effects, would strengthen the paper\\u2019s contribution.\", \"additional_comments_on_reviewer_discussion\": [\"The reviewers raised several concerns, many of which were not fully resolved during the rebuttal period:\", \"**Scalability**: While TIVE-efficient attempts to address computational concerns, the method\\u2019s cost-effectiveness compared to full-data fine-tuning remains questionable.\", \"**Task Insights**: The authors acknowledged gaps in understanding task-specific effects but did not provide sufficient new insights or experiments to address this weakness.\", \"**Performance**: Marginal improvements on some benchmarks fail to justify the complexity and computational demands of the method.\", \"**Generalizability**: The approach is highly dependent on task labels and specific dataset characteristics, limiting its broader applicability.\", \"One reviewer noted that data selection methods like TIVE may not align with current trends in leveraging larger models and datasets for better generalization. While the authors defended their approach as applicable to the instruction tuning phase, this distinction was not clearly demonstrated or validated.\"]}" ] }
ByLO7p0oCF
DebUnc: Improving Large Language Model Agent Communication Via Uncertainty Metrics
[ "Luke Yoffe", "Alfonso Amayuelas", "William Yang Wang" ]
To enhance Large Language Model (LLM) capabilities, multi-agent debates have been introduced, where multiple LLMs discuss solutions to a problem over several rounds of debate. However, LLMs often produce incorrect responses that appear confident, which can mislead other agents. This is partly because agents do not express their confidence levels during standard debates. To address this, we introduce DebUnc, a multi-agent debate framework that uses uncertainty metrics to assess agent confidence levels. We adapted the LLM attention mechanism to adjust token weights based on confidence levels and also explored using textual prompts to convey confidence. Our evaluations across various benchmarks show that attention-based methods are particularly effective, and that as uncertainty metrics improve, performance will continue to increase.
[ "multiagent debate", "model uncertainty", "agent communication", "large language models" ]
https://openreview.net/pdf?id=ByLO7p0oCF
https://openreview.net/forum?id=ByLO7p0oCF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x9PVKVvSq7", "w2KZUsTsgC", "sjSwidiqeF", "3BCygx7Mja" ], "note_type": [ "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730432207589, 1730686170059, 1731851882216, 1729222297741 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13166/Reviewer_EZeH" ], [ "ICLR.cc/2025/Conference/Submission13166/Reviewer_XH6i" ], [ "ICLR.cc/2025/Conference/Submission13166/Authors" ], [ "ICLR.cc/2025/Conference/Submission13166/Reviewer_s5TV" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces DebUnc, a framework that improves multi-agent LLM debates by incorporating uncertainty metrics to address overconfident incorrect responses. The framework communicates uncertainty either through text prompts or by adjusting the LLM's attention mechanism. Experiments across multiple benchmarks show that the attention-based approach consistently outperforms standard debates.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents its core idea in a clear and straightforward manner. The proposed solution of incorporating uncertainty metrics into multi-agent debates is simple.\\n\\n2. The figures effectively communicate the key concepts.\", \"weaknesses\": \"1. The experimental comparisons are insufficient. The authors should have included basic baselines like Chain-of-Thought (CoT) and compared their method with prior work on multi-agent debates [1,2]. A simple baseline of having agents directly generate uncertainty in their responses is also missing. More importantly, since multi-agent debate is quite similar to self-consistency (both generate multiple answers), they should compare with CoT self-consistency using similar computation budgets. They could have also tried applying uncertainty metrics directly to self-consistency, which might be simpler than their proposed approach.\\n\\n2. 
In the \\\"Attention Scaling\\\" section, many key notations ($w_i$, $m_j$, $f_i$) are just thrown in without proper definition. Some implementation choices, like only applying attention scaling to the previous round's responses, aren't explained or validated through ablation studies.\\n\\n3. Several important implementation details are inadequately explained: the decision to \\\"only apply attention scaling to the responses from the previous round\\\" lacks justification, and no ablation studies to validate such design choices.\\n\\n4. I'm also confused by Figure 4. How did authors get different data points for each method? Run with different random seeds? Also, the trend looks weak if we ignore the oracle metric (which I think we should, since it's completely impractical). And it's concerning that for \\\"Attention-Others\\\", the accuracy actually drops when AUROC increases to around 0.7.\\n\\n5. The improvements are marginal and unconvincing. With proposed uncertainty metrics, the authors only get 1-3% improvement, and less than 1% for Llama 3 8B. Given how much more complex and computationally expensive their method is, these gains are hard to justify.\\n\\nOverall, while the paper presents an interesting direction, the lack of comprehensive comparisons, unclear technical details, and marginal improvements make it difficult to assess the true value of the contribution. \\n\\n[1] Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate.\\n\\n[2] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. 
Encouraging divergent thinking in large language models through multi-agent debate.\", \"questions\": \"See above weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper attempts to augment mechanisms for collaborative debate by getting debaters to state their reported uncertainty about a question. The paper explores three different methods for reporting uncertainties and then three different methods for aggregating these uncertainties into the judge (how to use the reported uncertainties).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper introduces an interesting novel mechanism for interpreting reported uncertainties from other models.\\nThe paper is well written and structured well.\", \"weaknesses\": \"1) I\u2019m not sure any of the results in Table 1 or Table 2 are statistically significant. On dataset sizes of 100 data points and without reported uncertainty, if I naively calculate the Standard Error of Mean (assuming results are binomial distributed (0 or 1)), then all results have overlapping confidence intervals. I\u2019d advise running with more data or, if constrained, k-fold validation. Furthermore as generations are stochastic (at Temp=1) it would be good to run repeats anyway to clarify your reported results are good estimators of performance.\\n\\n\\n2) I\u2019m unsure if the attention-masked method is really suitable. You either live in the land where the model is a black box (such as an API), or the model is a white box (and you can alter the weights and the attention mask). If you propose methods in the second approach, then surely, under minimal training, judges will pick up on the debater's uncertainties.
\\n\\n\\n3) I think key components of the debate literature are missing: \\nIrving et al (AI safety via Debate)\\nKhan et al (Debating with more persuasive LLMs leads to more truthful outcomes)\\n\\n\\n4) I think the use of the oracle baseline is misleading here - in the situation where you have a perfect verifier in any of the debaters - we\u2019d expect performance to be really high. This should be the upper bound for all methods, but is not an extrapolation of what models can achieve.\\n\\n\\n5) Some wording is interesting; it is concerning that a finding from the paper in section 5.2 is \u201cThe best-performing uncertainty metric was the Oracle metric.\u201d - this is surely true by construction?\", \"questions\": \"1) How is the relevance score for TokenSAR generated?\\n\\n2) I think using the Oracle should actually be used as an upper bound for performance of each aggregation (or uncertainty combination) method. I think this would change table 1 results and suggest aggregating with \\n\\n3) In figure 4, the x-axis is the AUROC for the uncertainty metric - can you clarify which reported uncertainty this is (is it the final judge?). Also plotting uncertainty here (on the y-axis) would be useful. I also find it worrying that without oracle, all these trends are much weaker.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"One problem in multi-agent communication is that the uncertainty of utterances is not well captured. This paper directly compensates for this by proposing an improvement of the vanilla communication scheme---predicting uncertainty from the answer and putting that in the utterance as well.
Similar to majority voting but with weights, at a higher level.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed technique is well motivated.\"], \"weaknesses\": [\"How sure are we that language models won't express their uncertainties via natural language if we prompt them well enough? I'm expecting this capability should be attainable with few-shot prompts.\", \"A big confounder is that by tuning the hyperparameters of the uncertainty metrics, we actually find a predictor of the correctness. And this alone (instead of communication) is the real drive behind improved scores. An ablation is needed for a simple weighted majority vote.\", \"As mentioned in L141, Pham et al. and Chen et al. are other improvements made on Du et al. How does the proposed technique compare to them? (just asking, these can be argued to be contemporaneous)\"], \"questions\": [\"In Table 1 and 2, there is a risk that oracle could be interpreted as one of the proposed methods. It should be visually more separated.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
ByCV9xWfNK
Intermediate Layer Classifiers for OOD generalization
[ "Arnas Uselis", "Seong Joon Oh" ]
Deep classifiers are known to be sensitive to data distribution shifts, primarily due to their reliance on spurious correlations in training data. It has been suggested that these classifiers can still find useful features in the network's last layer that hold up under such shifts. In this work, we question the use of last-layer representations for out-of-distribution (OOD) generalisation and explore the utility of intermediate layers. To this end, we introduce \textit{Intermediate Layer Classifiers} (ILCs). We discover that intermediate layer representations frequently offer substantially better generalisation than those from the penultimate layer. In many cases, zero-shot OOD generalisation using earlier-layer representations approaches the few-shot performance of retraining on penultimate layer representations. This is confirmed across multiple datasets, architectures, and types of distribution shifts. Our analysis suggests that intermediate layers are less sensitive to distribution shifts compared to the penultimate layer. These findings highlight the importance of understanding how information is distributed across network layers and its role in OOD generalisation, while also pointing to the limits of penultimate layer representation utility. Code is available at https://github.com/oshapio/intermediate-layer-generalization.
[ "transfer learning", "intermediate layers", "learning dynamics", "OOD generalization" ]
Accept (Poster)
https://openreview.net/pdf?id=ByCV9xWfNK
https://openreview.net/forum?id=ByCV9xWfNK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yYypGwZ7P3", "xgr58xbMgk", "wKAd6H7tRb", "wI3z7sbmA6", "pzSjQI27Id", "paSEucgPQJ", "n1RXA2jSzL", "kfiYwrTYoo", "hRyniQalZl", "gtp1dWZjLV", "gG3Y8uJwFN", "dTey9rEbEv", "Z4uji1yqsL", "YKWGjUJzak", "Y1hjsA1bHU", "WwB4Ys6Xvm", "Txl2m1wpg4", "TUvOcFlvv9", "PJXBILthhs", "JkdBu9RXWJ", "BsAQI3m2vZ", "BSAOT0tQfN", "ATYMLbxLmA", "8v4J6xGZcc", "7KJzLe8Fcb", "5DFwrwED93", "1qDa5RSbjd" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732430307481, 1734497276423, 1732430995047, 1732209380875, 1737523908715, 1730732600571, 1732299409038, 1730835053106, 1732071372309, 1732201647072, 1731082772625, 1732480544873, 1732076192765, 1732430095408, 1732074169124, 1732143068929, 1730070558519, 1732062913141, 1733202322348, 1732142859045, 1730630233014, 1732430446782, 1732564098764, 1730981110492, 1732310877816, 1732066431798, 1732447907748 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Area_Chair_nHnS" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_G9DE" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_TgHf" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_ncZg" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_TgHf" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_JuAB" 
], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_G9DE" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_P4oy" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_Nh8y" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_JuAB" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_TgHf" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_P4oy" ], [ "ICLR.cc/2025/Conference/Submission8439/Authors" ], [ "ICLR.cc/2025/Conference/Submission8439/Reviewer_Nh8y" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for the score increase. We have updated the PDF to include a reference in the main text to the results on the influence of feature dimensionality in Appendix C.3. We sincerely appreciate the suggestion.\"}", "{\"metareview\": [\"(a) summary\", \"This paper investigates whether the intermediate layers of a deep neural network offer better out-of-distribution (OOD) generalization than earlier and last layers. 
It demonstrates that in a number of settings, including zero- and few-shot, training classifiers on intermediate layer representations tends to perform better than last-layer retraining, suggesting intermediate layers are less sensitive to distribution shifts.\", \"(b) strengths\", \"The paper is well motivated: it explores which layers are robust to distribution shift, which is important for OOD generalization.\", \"It introduces a new metric, the \\\"sensitivity score\\\", for measuring layer sensitivity to distribution shift (feature sensitivity).\", \"The interesting observation challenges the common practice of relying solely on the last layer's representations.\", \"It is well-written and easy to follow.\", \"(c) weaknesses\", \"It has limited novelty: the novelty of this contribution is somewhat diminished given that an earlier paper (accepted at last year's ICLR) has also proposed evaluating intermediate layers\\u2019 effectiveness in generalization for OOD samples:\", \"[2] Gerritz et al. (2024) \\\"Zero-shot generalization across architectures for visual classification.\\\" The Second Tiny Papers Track at ICLR 2024, https://openreview.net/forum?id=orYMrUv7eu\", \"An extended version of that paper (from the same group) also used linear probes to quantify the out-of-sample accuracy of intermediate representations:\", \"[3] Dyballa et al. (2024). A separability-based approach to quantifying generalization: which layer is best? arXiv preprint arXiv:2405.01524, https://arxiv.org/abs/2405.01524\", \"The experiments do not include some datasets and models.\", \"The results are not significant for transformers, only for CNNs.\", \"It lacks details on some concepts such as \\\"information content\\\".\", \"It is not practical because it is not easy to find the best layer.\", \"(d) decision\"], \"this_paper_is_well_motivated\": \"it explores which layers are robust to distribution shift, which is important for OOD generalization. 
The interesting observation challenges the common practice of relying solely on the last layer's representations. The authors had a successful rebuttal, and all reviewers recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The main concerns of the reviewers were the limited novelty and missing experiments. The authors' rebuttal and added experiments helped to address the reviewers' concerns. All reviewers raised their scores to accept.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper investigates how intermediate layers in deep neural networks (DNNs) can improve out-of-distribution (OOD) generalization, challenging the common practice of relying solely on the last layer's representations. The authors use the notion of Intermediate Layer Classifiers, which consist of linear probes attached to each intermediate layer of the network but trained specifically to perform OOD classification. Two scenarios are considered: one \"few-shot\", in which few OOD samples are available for training, and one \"zero-shot\", where no OOD samples are available, under several types of distribution shifts, datasets, and architectures. 
They conclude that in both cases training classifiers on intermediate layer representations tends to perform better than last-layer retraining, and suggest that one of the reasons for this phenomenon is that intermediate representations are less sensitive to distribution shifts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is very well-written.\", \"The paper assesses intermediate-layer performance across a variety of tasks, datasets, and architectures, which strengthens the generalizability of the findings.\", \"Interesting discussion on why intermediate layers might generalize better to OOD data, including their reduced sensitivity to distribution shifts and improved data efficiency.\", \"By demonstrating that intermediate layers can achieve competitive results even in few-shot or zero-shot settings, this work provides practical benefits, especially for applications where OOD data is scarce or unavailable.\"], \"weaknesses\": \"__Sensitivity Analysis:__ although the sensitivity analysis aims at showing the stability of intermediate-layer features, it lacks sufficient depth on why some intermediate layers are more stable than others in different types of shifts, particularly for different architectures. The inclusion of a summary of any trends observed across architectures/datasets/shifts would be helpful.\\n\\n__Figure 10:__ The point made in the description of Figure 10 is not clear from the plots shown, especially since it is not clear how many points occlude one another in the scatter plots. What dataset was used here, and what model? The quantitative analysis is also confusing: although the authors say that an increasing separation exists in earlier layers, I believe this only holds for the minority groups. Perhaps the authors should clarify this.\\n\\n__Terminology:__ I found the terminology a bit confusing when dealing with training points vs. testing points vs. probe points vs. OOD points. 
Namely, I had trouble following which sets contained OOD points vs. ID points, especially in the zero-shot section. It would be advisable to always point out which points are OOD.\\n\\n__Novelty:__ The literature reviewed for related work seems superficial. A notable example of a paper missing from the references is:\\n\\n[1] Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). \\\"How transferable are features in deep neural networks?\\\"_Advances in Neural Information Processing Systems_, 27. https://proceedings.neurips.cc/paper_files/paper/2014/hash/375c71349b295fbe2dcdca9206f20a06-Abstract.html\\n\\nwhich explored a method for quantifying the transferability of features from each layer of a deep network, including their applicability to shifts in the data (e.g., different subsets, and even different classes).\\n\\nFurthermore, the novelty of this contribution is somewhat diminished given that an earlier paper (accepted at last year's ICLR) has also proposed evaluating intermediate layers\\u2019 effectiveness in generalization for OOD samples:\\n\\n[2] Gerritz et al. (2024) \\\"Zero-shot generalization across architectures for visual classification.\\\" _The Second Tiny Papers Track at ICLR 2024_, https://openreview.net/forum?id=orYMrUv7eu\\n\\nIn fact, an extended version of that paper (from the same group) also used linear probes to quantify the out-of-sample accuracy of intermediate representations:\\n\\n[3] Dyballa et al. (2024). A separability-based approach to quantifying generalization: which layer is best?. 
_arXiv preprint arXiv:2405.01524_, https://arxiv.org/abs/2405.01524\\n\\nThis paper would certainly benefit from acknowledging that similar observations have been previously made when performing category discovery on unseen classes (as opposed to the distributional shifts here studied).\\n\\n__Frozen weights:__ The fact that the authors only considered frozen, pre-trained weights in all scenarios studied is somewhat unrealistic or at least incomplete. In any practical application of this method, if it is feasible to fine-tune the pretrained model on one's own data, then that will certainly be preferable. See reference [1] for an example in which both cases (frozen weights vs fine-tuned) are considered. Specifically for this paper, it would be interesting to verify whether the comparisons between penultimate layer and intermediate layer outputs hold in that scenario.\", \"questions\": [\"__Figure 3:__ which ResNet was used to produce the bar plot?\", \"__Figure 4:__ which ResNets were used to produce these plots? Was any trend observed when comparing the different ResNet sizes?\", \"__Section 4.3:__ can we really call \\\"zero-shot\\\" a setting in which the \\\"OOD\\\" data is composed of ID samples with added noise? That is a standard modification used in data augmentation (used to increase the number of ID samples), for example, so such a \\\"distribution shift\\\" seems extremely mild to be considered zero-shot. 
I would have liked to see a discussion of how the different shifts chosen compare to one another in terms of difficulty.\", \"Did the authors find that any particular architecture had intermediate representations consistently outperforming last-layer retraining?\", \"Did the best layer for a particular dataset seem to agree with the best layer for other datasets, for the same model?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I've checked the comments by the other reviewers, and I think I still recommend acceptance. Amongst the concerns, I think it is important that the authors put the answer to the question about feature extraction protocol from Reviewer P4oy into the paper.\"}", "{\"summary\": \"This paper shows that in ResNets and ViTs, intermediate-layer classification is more robust to the distribution shifts of the out-of-distribution samples (OODs). In particular it shows that the features of the pen-pen-ultimate layer consistently outperform the features of the pen-ultimate layer, when it comes to the zero-shot and few-shot cases. These results are interesting, as they contradict the current belief that it is enough to retrain the last layer, that is, the classification layer. 
It is assumed that each layer computes interesting invariants of an input image, and this paper raises the possibility that some of these invariants get lost in downstream layers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This is a very intriguing paper, that raises the question of what happens in the last layers, that overcomes generalization to OOD samples?\", \"weaknesses\": \"While analyzing the depth and the sensitivity as important factors in generalization ability, it would be interesting to understand what happens in the last layers that reduces generalizability?\", \"questions\": \"In principle, the features of intermediate layers, are propagated down all the way to the pen-ultimate layer, through skip connections, in both ResNets and ViTs. However, they are added before every layer with the output of the previous layer.\\n\\nIs this the reason of why generalization ability decreases? \\n\\nFrom the point of view of back-propagation the addition does not matter, as the derivative of a sum is the sum of the derivatives. However, this addition may clutter the features?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your insightful comments.\\n\\n**W1/Q1: Definition of \\u201cfew-shot\\u201d may differ from conventional standards**\\n\\nWe use the term \\\"few-shot\\\" to describe experimental setups where some OOD points are available at training time for probing. While Section 4.2.1 uses a significant portion of the test dataset, Section 4.2.2 explores settings with varying numbers of OOD points, including extremely low availability (as low as 1% of the test set). 
This corresponds to the following sample sizes per class:\\n\\n- CIFAR-100C: 2 samples \\n- CIFAR-10C: 5 samples \\n- CMNIST: 25 samples \\n- Waterbirds, CelebA, MultiCelebA: 10 samples \\n\\nWe believe that sample sizes of this magnitude are consistent with the few-shot setting, as they represent minimal data availability while still enabling meaningful evaluation.\\n\\n**W2/Q2: Reason for ID-trained probes improving OOD generalization isn't clear**\\n\\nWe agree that the result may seem counterintuitive. In Section 5.2 (*Feature Sensitivity in Intermediate and Penultimate Layers*), we define a global metric to measure how sensitive each layer is to distribution shifts. The analysis shows that intermediate layers are consistently less sensitive to such shifts compared to the penultimate layer.\\n\\nWith this setup in mind, our intuition is as follows: intermediate layers capture more general and robust features than the penultimate layer. In contrast, the penultimate layer is optimized for in-distribution performance and is therefore more sensitive to shifts. For example, if an image is perturbed by noise, the features in the penultimate layer may change significantly, while those in intermediate layers remain more stable. Classifiers built on these robust intermediate-layer features are better suited for generalization, even in the zero-shot setting. This is because the OOD representations lie closer to those of ID, enabling the classifier to correctly generalize to OOD points.\\n\\n**W3/Q3 The use of a single-layer linear classifier at an intermediate layer may restrict its ability to capture more complex feature relationships**\\n\\nThank you for pointing this out. 
While our goal was to present a simple method and challenge previous works claiming the transferability of penultimate-layer features, we agree that incorporating a more complex classifier is an important consideration.\\n\\nTo address this, we conducted additional experiments on CIFAR-10C and CIFAR-100C using ResNet18 and ViT models with an MLP to assess both penultimate and intermediate layer representations. The results, detailed in Appendix C.2 of the updated PDF, showed negligible performance improvements over linear probes for the penultimate layer representations but some improvement for intermediate layer representations. This suggests that while intermediate layers benefit from more complex probes, the limited performance of penultimate layers is not due to the probe\\u2019s capacity but rather the absence of sufficient learnable information in the representations.\\n\\n**W4: Relation to previous work in Neural Collapse**\\n\\nThank you for this excellent reference. We believe [1] complements our work, with key differences as follows:\\n\\n1. [1] focuses on transfer to new datasets (e.g., from ImageNet-1k pre-trained models), whereas our work examines distribution shifts within the same dataset, including visual shifts (ImageNet, CIFAR) and subpopulation or conditional shifts.\\n2. [1] proposes fine-tuning the layer with the greatest neural collapse (NC$_1$) on the downstream dataset alongside a linear classifier on that layer\\u2019s features, whereas our approach keeps the model parameters fixed and does not involve any parameter updates.\\n\\nTo further explore the role of Neural Collapse in dataset-specific models, we followed the setup of [1] and measured NC$_1$ under CIFAR-10C and CIFAR-100C distribution shifts. These results, detailed in Appendix C.1 of the updated PDF, reveal that the penultimate layer consistently exhibits the highest NC$_1$, even under distribution shifts. 
This contrasts with the findings in [1], which report dataset-dependent variability in the most-collapsed layer. We hypothesize this difference arises because [1] studies general pre-trained models, while our experiments focus on dataset-specific models.\"}", "{\"comment\": \"Dear authors,\\n\\nMany thanks for the thorough rebuttal, it addresses my concerns. I am also checking the rest of the reviews and will come back on it soon. \\n\\n\\\"selected ID for the zero-shot scenario and OOD for the few-shot scenario\\\"\\nWould be great if the authors amend the text of the paper accordingly.\"}", "{\"summary\": \"This paper investigates the use of intermediate layer linear probing classifiers to enhance a model's performance on out-of-distribution (OOD) datasets, where the test data distribution differs significantly from the training data distribution. People usually assume that the network can act as a combination of general feature extraction (early layers) plus in-domain task specialization (later layers). The method builds on the assumption that not only does the last classifier specialize in the in-domain distribution, but the last few layers in the backbone before the final classifier also focus on the in-domain distribution. So it proposes to place the probing classifier at an intermediate layer to improve transferability and adapt to OOD settings.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. 
The definition of \\u201cfew-shot\\u201d may differ from conventional standards, as it uses a significant portion of the test dataset (e.g., half of the CMNIST test samples) rather than the typically small sample sizes seen in few-shot learning. This discrepancy may limit the generalizability of the results.\\n\\n2. I'm confused about why the zero-shot probing setting can serve as a method to improve the OOD performance. I thought that when trained on in-domain data alone, the OOD performance would not improve since no OOD data is used to train the probing classifier. Can the zero-shot setting improve the OOD performance?\\n\\n3. The use of a single-layer linear classifier at an intermediate layer may restrict its ability to capture more complex feature relationships necessary for challenging OOD tasks. Incorporating a more complex classifier, such as a multi-layer perceptron (MLP) or adaptors, could potentially improve performance on these tasks.\\n\\n4. The main assumption aligns closely with [1], which suggests that layer-wise neural collapse (NC) progresses through two distinct phases in downstream tasks: Phase 1, where representations exhibit a progressively decreasing trend, capturing universal feature mappings; and Phase 2, characterized by a progressively increasing trend, focusing on task-specific feature mappings. A comparison with this prior work would provide valuable insights into how the proposed method builds upon or diverges from these observed phases.\\n\\n[1] Li, X., Liu, S., Zhou, J., Lu, X., Fernandez-Granda, C., Zhu, Z., & Qu, Q. (2022). Principled and efficient transfer learning of deep models via neural collapse. arXiv preprint arXiv:2212.12206.\", \"questions\": \"Based on the weaknesses, I have the following questions:\\n\\n1. Could you clarify how \\\"few-shot\\\" is defined in this context? 
The use of half the CMNIST test dataset may diverge from traditional definitions of few-shot, which usually refer to very limited samples (e.g., 1\\u20135).\\n\\n2. Could you provide more details on why zero-shot probing\\u2014where only in-domain data is used\\u2014seems to enhance OOD performance? This mechanism isn\\u2019t entirely clear, as one would generally expect some OOD data to be necessary for adapting to distribution shifts.\\n\\n3. Has the team considered testing more complex classifiers at the intermediate layer, such as a two-layer MLP? Or adding an adaptor between the backbone and the linear probe? Since pruning the intermediate layers may omit useful deep features, a more complex classifier could potentially bridge this gap and enhance OOD results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No Ethics Concerns\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their detailed response. I believe the paper has significantly improved after the authors addressed my concerns and those from other reviewers, so I am increasing my score to 6.\"}", "{\"comment\": \"Thank you for an insightful review.\\n\\n**W1: It would be interesting to understand what happens in the last layers that reduces generalizability?**\\n\\nWe agree that understanding this is important. Our findings suggest that the penultimate layer\\u2019s features are highly specialised for in-distribution tasks. This specialisation may reduce robustness to distribution shifts. The sensitivity analysis in Section 5.2 supports this, showing that penultimate-layer features are more sensitive to shifts than those in intermediate layers. This remains an open question, and we would be interested in exploring it further.\\n\\n**Q1: Is the reason for the decrease in generalization due to skip-connections?**\\n\\nThank you for raising this point. 
Skip connections propagate intermediate features through the network. In theory, this should preserve generalisable information. However, our results suggest that the penultimate layer aggregates features in a way that does not fully retain robustness. Figure 6 shows the performance of GoogLeNet, which does not use residual connections like ResNets but still exhibits a large gap in performance between intermediate and penultimate layers. This indicates that the observed behaviour is not solely due to residual connections. We do not have conclusive evidence linking skip connections to reduced generalisability, but this hypothesis is interesting and merits further investigation.\"}", "{\"comment\": \"Thank you for your insightful comments.\\n\\n**Q1: Why would the data be selected ID for the zero-shot scenario and OOD for the few-shot scenario?**\\n\\nIn the *zero-shot scenario*, $D_\\\\text{probe} \\\\sim P_\\\\text{ID}$, as no OOD data is used for training the probes\\u2014only the original training data that the backbone was trained on. 
In the *few-shot scenario*, $D_\\\\text{probe} \\\\sim P_\\\\text{OOD}$, aligning with the last-layer retraining paradigm, where limited OOD samples are used for adaptation. \\n\\nWe\\u2019d be happy to discuss this further if we misunderstood your question.\\n\\n**Q2: Inclusion criteria of models and datasets isn't clear**\\n\\nThank you for the question. We agree that the inclusion criteria should have been explained in more detail. The selection of models and datasets is closely intertwined, as we wanted to avoid training models on ID datasets ourselves. Each (model, dataset) pair typically requires specific recipes, and we aimed to prevent misrepresenting results by relying exclusively on publicly available pre-trained models. Essentially, we focused on distribution shifts where neural networks tend to underperform and selected datasets within each domain based on their popularity and usage in the community.\\n\\nWe deliberately excluded VTAB-1k datasets because they consist of subsets from multiple datasets, conflicting with our focus on evaluating models trained within a single domain. VTAB-1k evaluates performance across diverse datasets, making it more suitable for studying generalization across tasks rather than distribution shifts within a domain. This fundamental difference in scope is why we did not include VTAB-1k in our evaluation.\\n\\nWe have updated Section 4.1 to include the selection criteria for primarily using ViTs and ResNets due to their differing inductive biases under \\\"DNN model usage and selection\\\" and the dataset selection criteria in Table 2. Due to space constraints, adding a new subsection was challenging, so these updates were integrated into existing sections.\\n\\n**Q3: ILCs on generic pre-trained models on dataset shifts**\\n\\nThank you for raising this interesting question. 
Our work closely follows the last-layer retraining literature, and we deliberately restricted the scope to distribution shifts within a dataset, rather than transfer across datasets.\\n\\nTesting ILCs on generic pre-trained models like ImageNet-pretrained ViTs or DinoV2 would not align with our zero-shot framework, a key contribution of our work, where no OOD data is used for training. These models inherently involve pretraining on diverse datasets, which conflicts with the assumptions of our setup. While testing such models would be interesting, it falls outside the intended scope of our current study.\"}", "{\"title\": \"(2/2)\", \"comment\": \"**Q1: Which ResNets were used to produce the plots (Figures 3, 4)?**\", \"figure_3\": \"ResNet18. Figure 4: ResNet-50 (Waterbirds, CelebA) and ResNet18 (remaining plots). This information has been added to the updated manuscript, thank you for pointing this out.\\n\\n**Q2: Was any trend observed when comparing different ResNet sizes in Figure 4?**\\n\\nThe influence of ResNet size appears minimal. In cases with limited training data for probes, the gap between ILCs and last-layer retraining methods depends more on the dataset than the architecture.\\n\\n**Q3: Can we really call \\\"zero-shot\\\" a setting in which the \\\"OOD\\\" data is composed of ID samples with added noise?**\\n\\nThank you for this perspective. To clarify, in all \\\"zero-shot\\\" settings, we do not use any OOD data during training (e.g., no noise is added to ID samples during this phase). While we agree that noise addition conceptually represents a mild distribution shift, we note the following:\\n1. In CIFAR-10C/-100C experiments, we used the highest noise level (level 5). \\n2. 
Classifiers trained on noise-free CIFAR-10/100 datasets perform poorly on noisy datasets, with a performance drop of ~25% on CIFAR-10C (see Figure 1), illustrating the practical significance of this shift.\\n\\nGiven that the model was not originally trained on all noise variations and exhibits a drastic performance drop on noisy datasets, we believe this constitutes a valid OOD task.\\n\\n**Q4: Did the authors find that any particular architecture had intermediate representations consistently outperforming last-layer retraining?**\\n\\nYes, intermediate representations consistently outperformed last-layer retraining across nearly all dataset-architecture pairs. ResNets, in particular, exhibited consistent trends in all experiments. The only exceptions were CelebA and Waterbirds, where intermediate representations performed similarly to last-layer retraining in the few-shot setting with the full OOD dataset used for training. However, in zero-shot settings and cases where only a fraction of the data was used, ILCs continued to outperform last-layer retraining. We hope this answers your question, and we are happy to provide further clarification if needed.\\n\\n**Q5: Did the best layer for a particular dataset seem to agree with the best layer for other datasets, for the same model?**\\n\\nThank you for this interesting question. Figure 18 in the Appendix shows accuracies per layer in both zero- and few-shot settings. While the exact position of the best layer varies across datasets, it is typically located in the second half of the network, as discussed in Section 5.\"}", "{\"summary\": \"This paper demonstrates that leveraging intermediate features outperforms conventional approaches relying on last-layer features for OOD generalization. The claim is substantiated through extensive experiments across diverse datasets, varying data quantities, model architectures, and types of distribution shift. 
Additional analysis on feature sensitivity further reveals that intermediate features exhibit greater robustness to distribution shifts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and clearly presented.\", \"The experiments are comprehensive, covering multiple aspects.\", \"The analysis of feature sensitivity is interesting.\"], \"weaknesses\": [\"The most convincing experimental results supporting the paper\\u2019s claim are observed on CNNs, whereas the performance gains on (currently) more widely used transformer-based architectures are less pronounced, with intermediate features providing only marginal improvement over last-layer features. It may be worth discussing the potential reason for this performance disparity.\", \"More critically, while the paper offers detailed descriptions of the data and model setups, it lacks a clear feature extraction protocol. For instance, when using an intermediate feature with dimensions $256 \\\\times 4 \\\\times 4$, do the authors apply pooling before probing or directly flatten the representation? This question is important since ConvNets almost always have a feature dimension change across layers and this may contribute to the performance difference between intermediate features and last-layer features.\"], \"questions\": \"Please see the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for insightful comments. We address them below.\\n\\n**W1: Gap between transformers and CNNs**\\n\\nThank you for raising this point. We currently lack a definitive explanation but hypothesize it may relate to differences in receptive fields or architectural biases between CNNs and transformers. 
We will add a discussion on this in the paper.\\n\\n**W2: Influence of Feature Dimensionality**\\n\\nWe acknowledge that the feature extraction protocol was insufficiently clarified in the current draft and will address this in the revised manuscript.\\n\\nSpecifically, in this work, we did not apply any form of feature pooling before probing. While we understand the reviewer\\u2019s concern regarding the potential impact of feature dimensionality on performance, we believe this factor is secondary to the informativeness of features themselves. Our perspective is twofold: \\n1. For ILCs to succeed, the features in intermediate layers must be inherently informative of the task. If such features exist at these layers, dimensionality alone should not significantly influence performance (either the useful features are there or they are not). \\n2. Feature dimensionality does not explain the observed pattern in in-distribution (ID) performance, where the highest performance is almost always achieved using last-layer features. As shown in Figure 18 of the Appendix, ID performance peaks, or is near the peak, when using penultimate layer representations in the classifier $ILC_{L-1}$. \\n\\nTo ensure dimensionality is not a confounding factor, we conducted an additional experiment on CIFAR-10C using ResNet18. Initial output dimensions of layers were as follows: Layer 4: $16,384$, Layer 5: $8,192$, Layer 6: $4,096$, Layer 7: $2,048$, and Layer 8: $512$. \\n\\nRepresentations $\\\\mathbf{R}_i$ were extracted from training samples at each layer $i$, centered as $\\\\bar{\\\\mathbf{R}}_i = \\\\mathbf{R}_i - \\\\text{mean}(\\\\mathbf{R}_i)$, and the top $512$ PCA directions $\\\\mathbf{V}_i \\\\in \\\\mathbb{R}^{d_i \\\\times 512}$ were computed. For a test sample $\\\\mathbf{x}$, its representation $\\\\mathbf{r}_i(\\\\mathbf{x})$ was centered using the training mean and projected as $(\\\\mathbf{r}_i(\\\\mathbf{x}) - \\\\text{mean}(\\\\mathbf{R}_i)) \\\\mathbf{V}_i$. 
Linear probes were trained and evaluated on these projections in a few-shot setup. This setup ensures fixed-dimensional representations across layers, isolating the role of informativeness. We present the results in the table below.\\n\\n| Layer | Accuracy (Before PCA) | Accuracy (After PCA) |\\n|-------|-----------------------------|----------------------------|\\n| 5 | 74.3% | 71.4% |\\n| 6 | 75.3% | 74.6% |\\n| 7 | 70.4% | 71.3% |\\n| 8 | 69.9% | 69.9% |\\n\\nCompared to the results before PCA, the mean accuracy differences are minor, particularly for Layer 6 (the highest-performing layer in both setups). This supports the conclusion that dimensionality alone does not explain the trends observed in layer-wise performance, as the fixed-dimensionality setup after PCA produces similar relative patterns across layers.\\n\\nWe're happy to engage in further discussion.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThank you for your thoughtful feedback and recognition of our work. We are particularly encouraged by your comments, including:\\n\\n1. Challenging \\\"the conventional wisdom of using the last layer for zero- and few-shot learning\\\" (`TgHf`).\\n2. Describing it as a \\\"very intriguing paper\\\" and raising questions about \\\"what happens in the last layers that overcomes generalization to OOD samples\\\" (`ncZg`).\\n3. Highlighting the \\\"practical benefits, especially for applications where OOD data is scarce or unavailable\\\" (`G9DE`).\\n4. Recognizing that the paper is \\\"well-written and easy to follow\\\" (`Nh8y`, `G9DE`, `TgHf`, `P4oy`).\\n5. 
Acknowledging the clarity and thoroughness of our experiments, which evaluate intermediate-layer performance across various datasets and architectures (`G9DE`, `JuAB`).\\n\\nWe also greatly appreciate your valuable suggestions, which led to updates including:\\n\\n- A PCA-based analysis confirming that the improvement in ILCs' performance is not due to feature dimensionality (`P4oy`).\\n- Experiments with non-linear probes, showing marginal improvements in the penultimate layer and supporting the robustness of linear probes (`JuAB`).\\n- An analysis of Neural Collapse across layers, highlighting differences in robustness (`JuAB`).\\n- Results validating the transferability of findings across datasets (`TgHf`).\\n- Extensions to related work, specifying how our work relates to prior research (`G9DE`, `JuAB`).\\n\\nWe also value the exciting questions and suggestions raised by reviewers for potential future work. Your feedback has strengthened the work, and we are grateful for your engagement!\"}", "{\"title\": \"(1/2)\", \"comment\": \"Thank you for your insightful review.\\n\\n**W1: Sensitivity analysis lacks depth on why some intermediate layers are more stable than others in different types of shifts**\\n\\nThank you for the suggestion. We agree that understanding why some intermediate layers are more stable than others is an important question, especially across architectures. Answering this is challenging, as even within a single dataset like CIFAR-10C, performance varies significantly across different types of noise (see Figure 15 in the Appendix). This suggests that the stability of intermediate layers depends on both the nature of the shift and the architecture.\\n\\nWe are actively analyzing these trends across architectures and will incorporate the findings into the paper.\\n\\n**W2: Analysis in Fig. 9 / Fig. 10 isn't clear**\\n\\nThank you for the feedback. 
We clarify the intended points and outline the changes made to the paper for better clarity.\\n\\n*Quantitative Analysis (Figure 9)*\\nWe argue that earlier layers exhibit decreasing separation or sensitivity, particularly for minority data, which we regard as OOD. This aligns with your observation that the trend primarily holds for minority groups. The same conclusion extends to other shifts, such as CIFAR-10C and CIFAR-100C, as shown in Figure 19 in the appendix. The key point is that for the minority group, the penultimate layer separates ID and OOD points most distinctly.\\n\\n*Qualitative Analysis (Figure 10)*\\nThe plot shows train and test samples under PCA projections using the MultiCelebA dataset and a ResNet18 model. Thank you for noticing this missing detail. We have included this information in the updated version of the paper. We have also clarified the penultimate layer's separation for a minority group in the caption.\\n\\n**W3: Terminology: training points vs. testing points vs. probe points vs. OOD points**\\n\\nTo clarify this, we have explicitly indicated in the few- and zero-shot sections whether $\\\\mathcal{D}_{\\\\text{probe}}$ is sampled from ID or OOD distributions. Additionally, we have referenced Section 3.1 in these sections and included a graphical representation of the data used in the probes (see Table 1).\\n\\n**W4: Relation to previous works**\\n\\nThank you for these references. The key distinction is that the cited works primarily focus on generalization to novel class splits, whereas our work addresses scenarios where the task and visual variations remain closely aligned with the original distribution. We have updated the manuscript to relate our work to the previous studies.\\n\\n**W5: Authors only considered frozen, pre-trained weights in all scenarios studied, which is somewhat unrealistic or incomplete**\\n\\nWe agree that fine-tuning pre-trained models is common in practice. 
However, all models in our experiments were tailored to their respective datasets. For example, CelebA and Waterbirds models were fine-tuned on their specific datasets starting from ImageNet-pretrained weights, while CIFAR-10 models were trained from scratch. This setup ensured that each model was well-aligned with its target dataset, enabling a consistent evaluation of intermediate-layer representations without introducing additional variability from further fine-tuning.\"}", "{\"summary\": \"This paper questions the use of last-layer representations for out-of-distribution (OOD) generalization due to their sensitivity to data distribution shifts. Instead, by introducing intermediate layer classifiers (ILC), the paper shows that intermediate layer representations often outperform penultimate layer representations in generalization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and easy to follow. The observation is interesting that intermediate layers may perform better than the last layer in generalization. And, the paper introduces a new metric named \\\"sensitivity score\\\" to measure the sensitivity of each layer to distribution shifts.\", \"weaknesses\": \"1. The paper discusses a concept called \\\"information content\\\", but it lacks some detailed explanation. The authors need to demonstrate why the accuracy on OOD tasks can characterize \\\"information content\\\".\\n2. The potential impact is not so clear. The observation is interesting, but it may be difficult to find the \\\"best\\\" layer in practice. \\n3. It is better to provide more theoretical backups to support the experimental findings.\", \"small_typos\": \"\\\"atop\\\" (line 469); \\\"in earlier layers\\\" (line 497); \\\"$\\\\pi$\\\" not defined (line 500)\", \"questions\": \"1. Could the authors provide more explanations about the feasibility of using the sensitivity score as defined in the paper? 
Why is this score appropriate for characterizing the sensitivity?\\n2. All the experiments are based on the task of image classification. What will the results become when dealing with other tasks besides image classification?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thorough review. We have amended the text to reflect your suggestion and have included the reference to the feature extraction protocol question raised by Reviewer P4oy in the updated paper.\"}", "{\"comment\": \"Thank you for the additional experiments. As most of the concerns are addressed, I would raise the score from 5 to 6.\"}", "{\"summary\": \"The authors explore whether the intermediate layers offer better out-of-distribution (OOD) generalisation. They found that in a number of settings, including zero- and few-shot, they do and demonstrate the limits of penultimate layer representation utility. Most interestingly, the authors claim that \\u2018earlier-layer representations without fine-tuning on the target distribution often fare competitively with the last-layer retraining on the target distribution\\u2019.\\nThe contribution of the paper is empirical, and the authors support the claims with extensive number of experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Pros:\", \"Significance: The question of generalisation, especially to the zero-shot scenarios, is an important one. 
While I think it\\u2019s been tremendous work done, as I discuss in comment 2, it is important to discuss a number of limitations such as selection of evaluation dataset.\", \"Correctness: I haven\\u2019t spotted any incorrect statements in the paper\", \"Motivation: the paper challenges the conventional wisdom of using the last layer for zero and few-shot learning, and I believe it\\u2019s a great motivation\", \"Reproducibility: I\\u2019ve checked the experimental setting and it looks reproducible to me. I would also highlight that the description of the experiments is exceptionally clear\", \"Clarity: the paper is very clearly written (apart from a couple of comments below)\"], \"weaknesses\": \"Cons:\\n- Clarity and limitations: a few questions on the experimental setting\", \"questions\": \"1. \\u201cLine 198-199: using \\u00a0data that can be either ID or OOD, depending on whether \\u00a0the setup is zero-shot or few-shot.\\u201d Not sure I get this phrase, why would the data be selected ID for the zero-shot scenario and OOD for the few-shot scenario? Shouldn\\u2019t it be both ID and OOD for both cases (or just OOD)?\\n2. The biggest question and concern is the inclusion criteria for both datasets and the models. The dataset performance beyond most-known datasets such as CIFAR-10 and CIFAR-100 may differ not least due to the fine-grained nature of some of these datasets. Figure 7 from Tobaben et al (2023), for example, chose to include VTAB-1k datasets to test the performance on the different datasets. On the inclusion of the models, on one hand, it seems to give a fair share of ViTs and ResNets, however it would be great to spell out the inclusion criteria: why did the authors choose this particular selection of the model? Separate subsection in Section 4 seems to be an appropriate place for this. \\n3. The authors can argue that it\\u2019s not within the scope, so it would be a fair play, however, it would be interesting to see the following. 
It appears that the models have been trained on the clear target dataset (i.e., the authors evaluate the OOD performance of the CIFAR-100 trained model), and I wonder if they tried whether these findings generalise for models, pre-trained on generic large-scale data (i.e. ImageNet-pretrained or pretrained models such as DinoV2 or other varieties of pretrained ViT)? This may be useful in finding out whether the conclusions of the paper remain valid for the scenarios when \\n\\n\\nTobaben et al (2023) On the Efficacy of Differentially Private Few-shot Image Classification, TMLR 2023\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the additional experimental results provided by the authors.\\n\\nI think this PCA experiment is highly persuasive and significant. I recommend that the authors consider incorporating it into the main body of the paper in a future updated version.\\n\\nAs my primary concern has been addressed, I am increasing my score from 5 to 6.\"}", "{\"comment\": \"Thank you for your insightful comments. We address them below.\\n\\n**W1: Usage of \\\"information content\\\": relationship between accuracy and \\\"information content\\\".**\\n\\nThank you for raising this point. We refer to \\u201cinformation content\\u201d as the representations\\u2019 informativeness about the task under distribution shifts. Accuracy is used as a proxy for this informativeness, as it directly reflects how well the features support the task. We focus on the lack of informativeness in the penultimate layer because its representations are optimized for the original task, achieving high accuracy in-distribution. 
As shown in Section 4.2.1 (*Information Content for OOD Generalization at Last Versus Intermediate Layer*), these representations often perform worse on OOD tasks compared to intermediate-layer probes, highlighting their limited robustness for OOD generalization and thus their lack of information content, even with abundant OOD data.\\n\\nTo further support this argument, we conducted additional experiments using non-linear probes (e.g., multi-layer perceptrons) on intermediate-layer representations, as described in Appendix C.2 (in the updated version of the PDF). The results showed negligible performance improvements compared to linear probes. *This suggests that the lack of performance is not due to the probe\\u2019s limited capacity but rather the absence of learnable information in the representations*.\\n\\n**W2: It may be difficult to find the \\\"best\\\" layer in practice**\\n\\nWe agree that finding the best layer is not always straightforward. However, model selection typically involves a held-out testing set, which can include OOD datapoints [a, b, c]. Under this setting, selecting the best layer is analogous to selecting the best hyperparameters or training epochs when using a last-layer classifier. Since we assume access to already-pretrained models, this approach only requires training a set of linear probes, making it computationally similar to last-layer retraining methods [a].\\n\\n[a] Kirichenko, Polina, Pavel Izmailov, and Andrew Gordon Wilson. \\\"Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations.\\\" ICLR 2023 (2023). \\n[b] Sagawa, Shiori, et al. \\\"Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization.\\\" arXiv preprint arXiv:1911.08731 (2019). \\n[c] Gulrajani, Ishaan, and David Lopez-Paz. 
\\\"In search of lost domain generalization.\\\" arXiv preprint arXiv:2007.01434 (2020).\\n\\n**W3: Lacking theoretical backing** \\n\\nWe agree that a mathematical understanding of this phenomenon would be valuable. However, we believe this is challenging, as it requires developing insights specific to DNNs that extend beyond analyses of simpler, two-layer networks. Even for last-layer retraining methods, theoretical understanding is limited. Our empirical findings suggest that intermediate layers capture more general, task-invariant information and handle OOD data more similarly to ID data compared to the penultimate layer. Understanding how features evolve across layers, particularly why the penultimate layer sometimes also captures task-invariant features, remains an open question and is largely addressed in empirical studies at present.\\n\\n**Q1: Feasibility and appropriateness of feasibility score**\\n\\nThe key idea is to measure how features change from ID to OOD data. This is done per group (or per class, in classification problems), accounting for how each sub-cluster within the group spreads under distribution shifts. The score reflects a layer\\u2019s ability to preserve features consistent across ID and OOD data. By normalising with within-group distances, the metric avoids bias from the variability of ID data within a layer's features. Additionally, the score aligns with observed layer behavior, where intermediate layers exhibit lower sensitivity and greater robustness, supporting its validity as a meaningful measure of sensitivity. For these reasons, we believe this score is appropriate for evaluating feature sensitivity.\\n\\n**Q2: Tasks other than image classification**\\n\\nWhile we don\\u2019t have results for tasks beyond image classification, we hypothesize that the outcomes depend on the nature of the supervision signal. 
Tasks with fine-grained supervision may benefit less from intermediate layers, whereas tasks with coarser supervision signals may rely on them more.\"}", "{\"comment\": \"Thanks to the authors for their response, which addresses most of my concerns. I still recommend acceptance and maintain my rating.\"}" ] }
BxQkDog4ti
Range, not Independence, Drives Modularity in Biologically Inspired Representations
[ "Will Dorrell", "Kyle Hsu", "Luke Hollingsworth", "Jin Hwa Lee", "Jiajun Wu", "Chelsea Finn", "Peter E. Latham", "Timothy Edward John Behrens", "James C. R. Whittington" ]
Why do biological and artificial neurons sometimes modularise, each encoding a single meaningful variable, and sometimes entangle their representation of many variables? In this work, we develop a theory of when biologically inspired networks---those that are nonnegative and energy efficient---modularise their representation of source variables (sources). We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise. Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work. Rather we show that sources modularise if their support is ``sufficiently spread''. From this theory, we extract and validate predictions in a variety of empirical studies on how data distribution affects modularisation in nonlinear feedforward and recurrent neural networks trained on supervised and unsupervised tasks. Furthermore, we apply these ideas to neuroscience data, showing that range independence can be used to understand the mixing or modularising of spatial and reward information in entorhinal recordings in seemingly conflicting experiments. Further, we use these results to suggest alternate origins of mixed-selectivity, beyond the predominant theory of flexible nonlinear classification. In sum, our theory prescribes precise conditions on when neural activities modularise, providing tools for inducing and elucidating modular representations in brains and machines.
[ "neuroscience", "representation learning", "disentanglement", "modularisation", "neural networks", "hippocampus", "cortex" ]
Accept (Poster)
https://openreview.net/pdf?id=BxQkDog4ti
https://openreview.net/forum?id=BxQkDog4ti
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mCDvddqOVn", "lBG5zfkEJ7", "dMZ0J04z0U", "aNQ76IYNxt", "ZTQXeWbzAH", "YLVNNdC2Y6", "UXEZFluMGd", "UFjaqTmB1b", "U7as8MuATa", "S2OJHi4aFg", "LeiGdoxfSh", "LWuJUVjFBr", "LLek3ze3t7", "IxAGNIUnwQ", "E3iahrezeo", "BvGjwfBWAE", "AqmY8txVoD", "AUO434JFEK", "9SQ7tIVcwq", "59RkZBrlDR", "4UBzDt7SQC", "28jZPVjq3M" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733198766432, 1732168214998, 1732773754697, 1732773448508, 1730780888366, 1737523661646, 1732168593056, 1732168335542, 1732531410991, 1732168424643, 1732773077475, 1733714524754, 1732531559779, 1732168063661, 1732679286655, 1732168050397, 1732555214753, 1732531357451, 1730745856560, 1732167726851, 1730595677851, 1732168271556 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4777/Reviewer_1fwm" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Reviewer_a7qJ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Area_Chair_nLV9" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Reviewer_a7qJ" ], [ 
"ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Reviewer_Gn2p" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Reviewer_1fwm" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ], [ "ICLR.cc/2025/Conference/Submission4777/Reviewer_Gn2p" ], [ "ICLR.cc/2025/Conference/Submission4777/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the detailed response. The revision addressed some of my concerns. It helps to analyze fewer examples in greater depth. I still have substantial reservations about the results in Section 5 and the restrictive assumptions behind the theory. But given the improvement of the paper, I am happy to raise my score to 6.\"}", "{\"title\": \"Author Response\", \"comment\": \"We thank the reviewer for their careful and detailed review of our work that helped us significantly improve it. Below we hope to address some of their concerns.\\n\\n*Weakness 1: Writing could be improved, including theorem introduction in theorem 2.1*\\n\\nWe\\u2019re sorry for the unclear writing. To remedy this we have significantly changed the buildup to theorem 2.1 by adding additional details to section 2.2 clarifying the intuition for why range independence leads to modularity, which now reads:\\n\\n> The key takeaway lies in how $b_j$ is determined by a joint minimization over $s_1$ and $s_2$ (5). Assuming positive $w_{j1}$ and $w_{j2}$; then if $s_1$ and $s_2$ take their minima simultaneously, as in the middle row of Fig1a, then mixed bias must be large:\\n\\\\begin{equation}\\nb_j = -\\\\min_{s_1, s_2} \\\\left[ w_{j1} s_1 + w_{j2} s_2 \\\\right] = - \\\\min_{s_1}[w_{j1} s_1] - \\\\min_{s_2} [w_{j2} s_2] = b_{j'} + b_{j''}\\n\\\\end{equation}\\nAnd the energy of the mixed solution will always be worse than the modular, since $b_j^2 = (b_{j'} + b_{j''})^2 > b_{j'}^2 + b_{j''}^2$. 
Alternatively, mixing will be preferred when $s_1$ and $s_2$ do not take on their most negative values at the same time, as in the bottom row of Fig 1a, since then $b_j$ does not have to be as large to maintain positivity, and the corresponding energy saving satisfies the key inequality (6).\\n\\nFurther, we have added the following clarifying sentences after the introduction of theorem 2.1 to link it to the previously discussed intuition.\\n\\n> These inequalities come from the difference in activity energy between modular and mixed solutions, just like the intuition we built up in Section 2.2, and in particular Equation 6.\\n\\nMore broadly, we have moved one of the neural analysis sections to the appendix and used the space to expand the other sections. We have expanded the discussion of the effects of range independence (new section 2.4), grown the figure labels to make it clearer, and rewritten the mixed selectivity part of the discussion. Our most pertinent changes were to figure 1, in answer to another of your comments:\\n\\n*Weakness 7: Unclear what\\u2019s going on in figure 1, came out too early*\\n\\nApologies for Figure 1. We have now completely re-worked it, moved it much later in the paper, and added a much extended caption to try and more clearly explain it. The new intuitive figure (new fig 1A) focuses on the difference between a mixed encoding of a range-independent and range-dependent pair of variables, following more closely the discussion in the intuitive section. Further, we have made it larger to make it clearer, and added to the caption to walk the reader through our argument in the sketch figure (1a), and the definitions of all quantities in the data figures (1b and 1c). 
We thank you for pointing to these shortcomings.\\n\\n*Weakness 2: Applications to data appear to be preliminary*\\n\\n*Weakness 3: Non-neuron aligned subspaces in PFC, what\\u2019s going on there?*\\n\\n*Relatedly: Question 2: What is the relationship between modularity and orthogonality?*\\n\\nWe agree with the reviewer, in the original submission the PFC section was slightly too preliminary, and we merged orthogonality and modularity too quickly. As such, we have moved these results to the appendix and replaced it with a section that focuses on how our theory impacts interpretations of the role of mixed selectivity from data, which we would be grateful if the reviewer considered. We thank the reviewer for prompting us to think more carefully about this, and we are currently running further analysis to address their and other concerns.\\n\\nHowever, regarding the entorhinal mixed selectivity results, we disagree with the reviewer and consider them thorough. We hope that our response to your next question will convince you of the thoroughness of the results.\"}", "{\"title\": \"Final Response\", \"comment\": \"Thank you very much for your feedback!\\n\\nWe will include the remaining parts of our responses to you in clarifying details in the paper for its final version.\\n\\nWe have unfortunately not been able to produce the reconstruction error simulations in time for the rebuttal period, so those will have to be included in a later version, both if it is accepted and if not. A final thought is that our image experiments in figure 2 already have a reconstruction loss element, and demonstrate that the main ideas seem to generalise to that setting.\\n\\nThank you again for your engagement with our work!\"}", "{\"title\": \"Final Response\", \"comment\": \"Thank you for your response!\\n\\nWe agree, more neuroscience data would help, and we're currently actively looking at the puzzling PFC single neuron responses. 
\\n\\nWe are pleased to say, however, that one of the discrepancies between our theory and the data has now been removed. Previously the numerical alignment between subspaces was different in data vs. theory. As we suggested, we played with the hyperparameters (changed the relative weighting of weight vs. activity loss from 1 to 0.3, and added a delay period of length 2 rather than 1) and the two now align.\\n\\nThank you again for your engagement with our work!\"}", "{\"summary\": \"This paper investigates why neural representations in biologically inspired networks sometimes form modular structures, where each neuron encodes a single variable, and other times create mixed-selective representations. The authors develop a theory that predicts when modularization will occur in networks optimized for energy efficiency with nonnegative firing rates. They derive necessary and sufficient conditions for modularity based on the spread of source variables. The theory is validated in both linear and nonlinear networks. The theory provides a cohesive explanation for the conflicting findings in the prefrontal cortex and entorhinal cortex data from neuroscience studies.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces a novel theory that precisely predicts necessary and sufficient conditions for modular representations in biologically inspired networks, extending previous work beyond statistical dependencies.\\n\\n2. The mathematical formulation is rigorously derived, and validated across various neural network architectures and experiments.\\n\\n3. The theory provides explanations for conflicting neuroscience findings and has close links to biologically plausible architectures and brain representations.\\n\\n4. 
The paper provides a cohesive theory for understanding modularity in neural representations, with implications for both interpreting biological neural data and guiding the design of artificial neural networks for better interpretability and efficiency.\\n\\n5. The paper is well-written, presenting complex theoretical concepts with clarity and intuition.\", \"weaknesses\": \"1. The experiments use nonnegative activities in neural networks, which aligns with biological plausibility, but it would be valuable to discuss inhibitory neurons in the brain and how inhibition might relate to the theory and findings.\\n\\n2. While the L2 norm of firing rates and weights is a reasonable approximation for biological energy, other biological constraints (e.g., sparse connectivity, synaptic range, anatomical structure, and decoding flexibility) may also play a role.\\n\\n3. It\\u2019s unclear how the theory would extend to more complex datasets. For example, what would the different conditions/variations in source variables mean in naturalistic data (e.g., natural images, audio, text)? How might we approximate \\\"spread\\\", and quantify modularity conditions in such stimuli?\\n\\n4. The discrepancies in prefrontal working memory modeling and the brain data could have further explanations, particularly why some neurons tune to both colors despite orthogonal encoding and why exact subspace angles were not obtained.\", \"questions\": \"1. Beyond prefrontal working memory and the entorhinal cortex, does the theory generalize to other modular representations in the brain?\\n\\n2. 
If decoding flexibility were considered as a biological constraint, how might it impact the theory and its predictions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Author Response 2\", \"comment\": \"*Weakness 4: No mapping of L2 costs to the actual costs biological neurons are trying to minimise*\\n\\nWe agree that it is not obvious that an L2 cost is the appropriate one. We are not dogmatic about this, and instead simply think that neural firing is costly. As stated in the main response, to show that our conclusions generalise to other ways in which you might choose to penalise the activity, we include a new appendix which shows our main result (that support independence leads to optimal modular representations) generalises to other choices of norm. We could try and get more detailed about the exact mapping of spiking onto the metabolic currency (ATP), and indeed others, such as David Attwell, have invested significant effort in calculating these energy uses. However, these quickly become mathematically complex, so we prefer for the moment to take a slightly more abstracted approach that we think is broad and nonspecific enough to apply independent of the precise details.\\n\\n*Question 1: What\\u2019s going on with biological linear RNNs?*\\n\\nThe biological indeed refers to the energy regularisation. Real biological RNNs are of course non-linear. Rather, our theory applies to linear RNNs with energy constraints, and we derive some interesting results regarding modularisation in these systems. 
Apologies for the confusion; we have tried throughout to instead refer to them as \\u2018linear RNNs with biological constraints\\u2019.\\n\\n*Question 4: Section 2.2 assumes positive weights?*\\n\\nApologies, yes, parts of the intuitive argument in section 2.2 rely on using positive weights and we now include that. Thank you for catching that.\\n\\n*Question 5: \\u2018be better\\u2019 is imprecise.*\\n\\nAgreed, corrected to \\u2018uses less energy\\u2019.\\n\\n*Question 6: Fig 1d neuron\\u2019s angle not described?*\\n\\nApologies, we\\u2019ve included a longer description in the caption to define this concept:\\n\\n> Here the y axis measures how mixed the representation is by quoting the largest angle between a neuron\\u2019s weight vector and one of the source axes.\\n\\n*Question 7: I don\\u2019t understand the what-where justification*\\n\\nApologies for the confusion. The what-where regression tries to test the qualitative trends outlined at the end of section 2 for when nonlinear representations of variables (the two variables here are what and where) should modularise. We therefore test three datasets. The first removes a corner from the joint range of what and where (i.e. position 1 (where) and shape 1 (what) don\\u2019t co-occur, so there is a point missing from the co-range of what and where). We then test how the modularity of the optimal representation varies as the size of the missing corner increases. The second test (green) changes the distribution of the data such that more data is drawn from the diagonal (e.g. what = 3 and where = 3 is more common than what = 2 and where = 3); finally, the last test removes data from the diagonal (what = 5, where = 5) rather than the corners. We find qualitative trends that match the theoretical predictions. 
Removing data from the corners is more effective than both correlations that don\\u2019t change the range and removing diagonally positioned data at producing mixing, for the same amount of induced source multi-information.\\n\\n*Question 8: How relevant are these results for normal ANN stuff (language or images)?*\\n\\nWe would likely proceed much as we did in the last panel of figure 2, which is an application of our theory to a complex image dataset. Apologies if it was not clear that this was a real image dataset; we now include text to highlight this. \\n\\n> We study the performance of QLAE trained to autoencode a subset of the Isaac3D dataset (Nie 2019), a naturalistic image dataset with well defined underlying latent dimensions.\\n\\nAs such, we would assume that the measured representation is encoding a particular set of latent variables, then reason about how the theory suggests the distribution of those source latents might lead them to be optimally encoded in a modular or mixed fashion. This has many obvious weaknesses (what are the latents? What if it is not just encoding but is computing with the latents, won\\u2019t that affect things?), but presents a good start that is already predictive, as in figure 2.\\n\\nWe would like to thank the reviewer for their attention towards our work.\"}", "{\"title\": \"Author Response Part 3\", \"comment\": \"*Weakness 8: Ignores biologically relevant noise*\\n\\n*Relatedly - Question 1: What are the assumptions on noise?*\\n\\nOur model is indeed a very simple model, and purposefully so! It allows us to understand it in detail and extract surprising predictions that match neural data. Biological noise is likely an important concern. In our work we do not consider it; rather, we consider the optimal noise-less tuning curve. Most work that uses artificial neural networks to predict neural activity makes a similar assumption, and it is interesting how well this appears to be working. 
Nevertheless, we think that noise is an important direction and have added the following to our limitations section to discuss this and other effects:\\n\\n> Additionally, our model is a purposefully simple model, there are many biologically pertinent effects whose impact on modularity would be interesting to study, such as connection sparsity, anatomical constraints, noise, or requiring additional computational roles from the network.\\n\\n*Question 4: What is going on with the matrix S?*\\n\\nUsing the matrix notation just summarises the system of equations from the earlier proof, and helps build the other interpretation of the proof - whether the ellipse (defined by S) is fully contained within the convex hull of the data or not. We have tried to rephrase the maths in that section to make it clear where that matrix comes from. In particular, we have added an extra row and column to the definition so the completion is clearer, moved its definition closer to its natural introduction, and highlighted where it pops up naturally. In short, the matrix is minus the covariance matrix of the sources with its diagonal removed and replaced by the square of the min of each of the sources. It defines the ellipse that governs the modularising behaviour.\\n\\n*Question 5: What\\u2019s up with corner cutting in the title*\\n\\nThe corner cutting was meant to be about how modularity can be turned off/on depending on whether the corner of the data distribution is cut off or not. We agree that this is very unclear! So we have changed the title to focus on the main point of the paper that generalises to other norms and multidimensional variables: \\n\\n> Range not statistical independence drives modularity in biological representations \\n\\n(despite the fact this is only a sufficient result, unlike the main result, which is also necessary)\\n\\n*Question 6: What\\u2019s up with \\u2018extracting conclusions\\u2019*\\n\\nSorry for the confusing wording. 
We simply meant that we looked for qualitative trends in the theory that we could test in the nonlinear settings. We have changed the wording to clarify this.\\n\\nFinally, we would like to emphasise the development of our work from that of Whittington et al. \\u201823. We make no additional assumptions relative to that work, yet are able to derive much weaker conditions on the sources that lead to modularisation. It is only thanks to this weakening of conditions that we are able to understand modularisation in RNNs (fig 3), neural data from entorhinal cortex (now fig 4), and make subtle conclusions about potential sources of measured mixed selectivity (new fig 5). Further, our conditions are necessary and sufficient, and able to predict the degree of modularisation or mixing much more accurately than the work of Whittington et al. \\u201823.\\n\\nThank you again for your time and attention!\"}", "{\"title\": \"Reminder\", \"comment\": \"Thank you again for your valuable feedback. As the end of the discussion period is approaching, we are hoping to hear your thoughts on our response.\\nWe hope that we have addressed your comments, and we would greatly appreciate it if you were willing to consider increasing your score if you are satisfied with our response.\"}", "{\"title\": \"Author Response\", \"comment\": \"We thank the reviewer for their attention and reading of our paper. Below we try to address the concerns they raised.\\n\\n*Weakness 1: Contains too much!*\\n\\n*Weakness 2: Figure and text panels are small throughout*\\n\\nWe agree that the paper is full and apologise for any lack of clarity that results. To address this, where possible, we have increased the size of figure labels and figures throughout the paper. Further, we have moved the PFC results to the appendix, making the focus of the neuroscience results more in line with the main thrust of the paper, and more coherently centred on the entorhinal cortex. 
With this extra space we have significantly expanded various parts of the paper, in particular the section discussing the main theorem and providing intuition and interpretation. We hope these changes have somewhat improved the clarity.\\n\\n*Weakness 3: How relevant are extreme points of distributions for actual neuroscience?*\\n\\nThe reviewer is right. Our results depend on the extreme points in worrying ways. The extreme points are determined by outliers, and it seems strange for very unlikely outlying points to govern the behaviour of the optimal representations.\\n\\nThe reason this happens in our theory is that we require the representation to perfectly decode the labels for all datapoints. If, instead, the reconstruction loss is included as a term to be minimised alongside the activity and weight energy, then this failure mode would be removed. Instead of being forced to encode all outlying points, the representation could choose to ignore an unlikely extreme point, paying a small reconstruction cost for a saving in energy. \\n\\nWe would like to theoretically understand this effect; however, each of these theoretical developments takes time and we have not yet been able to develop this one. We are now working on empirical results to show this effect that we hope will be ready before the end of the discussion period, and if not by then, definitely by the time of the conference.\\n\\nThat said, the idea that what matters for modularisation is that the support of the variables is independent, even if the distribution is not, seems a more reasonable conclusion than alternatives for actual neuroscience. For example, testing whether two variables are statistically independent is difficult, but checking whether they are extreme-point independent just requires tracking whether you\\u2019ve ever seen the four corners of the distribution occur. Indeed, with range-independence we are able to explain neural data that no other theory can (figure 4). 
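The four-corners bookkeeping just described can be sketched in a few lines. This is our own toy illustration, not code from the paper: the `range_independent` helper and the `frac` tolerance for calling a sample "extreme" are assumptions, not part of the formal definition.

```python
import numpy as np

def range_independent(x, y, frac=0.1):
    """Heuristic check of extreme-point (range) independence for two
    scalar variables: have all four corners -- joint occurrences of
    near-extreme x with near-extreme y -- ever been observed?"""
    x, y = np.asarray(x, float), np.asarray(y, float)
    lo_x = x <= x.min() + frac * (x.max() - x.min())
    hi_x = x >= x.max() - frac * (x.max() - x.min())
    lo_y = y <= y.min() + frac * (y.max() - y.min())
    hi_y = y >= y.max() - frac * (y.max() - y.min())
    corners = [lo_x & lo_y, lo_x & hi_y, hi_x & lo_y, hi_x & hi_y]
    return all(c.any() for c in corners)

# Sources that fill their joint range pass; sources confined to the
# diagonal (each value of x constrains y) fail.
grid_ok = range_independent([0, 0, 1, 1], [0, 1, 0, 1])
diag_bad = range_independent([0.0, 0.5, 1.0], [0.0, 0.5, 1.0])
```

Unlike a statistical-independence test, this check needs no density estimation: a single observed sample in each corner region suffices.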
We are in the process of performing neural experiments in rodents to test these conclusions, though of course this is beyond the scope of the present work.\\n\\n*Continued - neurons do not autoencode:*\\n*Question 3: Autoencoding is a limited objective, what\\u2019s up with it?*\\n\\nThis is certainly true. Neurons are doing far more than autoencoding. However, our goal here is not a particularly accurate model of any neural circuit performing a computation. Rather, we seek a simple model of mixed selectivity vs. modularity, and why neurons might choose one or the other. A minimal model has to force the neurons to encode the variables somehow. This could be done by making the neurons perform computations with those variables, or by producing a particular behaviour, as the reviewer suggests. But then theoretical analysis will likely be hard and dependent on the simplified task that was studied. Instead we try to make minimal assumptions, and just study neurons forced to encode variables. We chose autoencoding because it doesn\\u2019t make any assumptions about what the variables are being used for, just that they are represented and decodable (linearly for theoretical analysis; empirically we drop this). That this simple setup is sufficient to produce interesting behaviour that matches neural responses further justifies this choice in our mind. This of course does not negate the fact that ideally we would study theories of neurons doing meaningful computations, and this is a target for our future work. 
We have included a sentence in the discussion to this effect:\\n\\n> Additionally, our model is a purposefully simple model, there are many biologically pertinent effects whose impact on modularity would be interesting to study, such as connection sparsity, anatomical constraints, noise, or requiring additional computational roles from the network.\"}", "{\"title\": \"Final PDF Change\", \"comment\": \"To briefly explain our final submission edit, we fixed a few typos, and changed some task parameters in order to make the PFC result numerically align with the data.\\n\\nHow aligned the two subspaces are in our PFC model networks depends on critical hyperparameters in the theory, such as the weighting of weight vs. activity loss, and the length of the delay period. We played a little with these and found a set that agreed numerically with the data. The crucial prediction remains the same, however: if the stimuli are range-independent then, regardless of these choices of hyperparameters, the subspaces are orthogonal. If they are range-dependent then the subspaces can align, and how much they do depends on hyperparameter choices in our theory.\"}", "{\"metareview\": \"This work tackles a timely subject in distributed computing in the brain: modularity. The work aims to provide a theoretical framework that can identify when and why modularity occurs. The theory itself builds on the analysis of optimal encoding of a set of inputs, with the general constraints that the encoding must be \\\"simplest\\\" by minimizing the norms of the weights, and that the encoding variables are non-negative. Both assumptions are used to identify the key result, and to formulate the result in a way where comparing a convex hull of the data to a theory-driven ellipse results in an understanding of when inputs should be mixed or not. The paper itself is very dense with a significant appendix detailing much of the theoretical ideas and derivations. 
The primary result of the work was that the concept of data spread could be considered as important as statistical independence between variables that are encoded.\\n\\nThe reviewers mentioned a number of points where clarity could be improved, and the authors in response amended the manuscript and included additional analyses, extending the model and adding more experimental results. Some of the more central points might be more important to the assessment of this manuscript. One reviewer raised the question of why an encoder framework, independent of task requirements etc., is the right framework in which to view modularity. Similarly, another reviewer questioned whether the model depends too much on the linear setting. I believe these concerns actually speak to a deeper assumption: what are the chosen inputs? There are multiple levels at which tasks and performance can be modularized or not. In some ways receptive fields are \\\"non modular\\\" in the space of localized point sources. The entire endeavor rests on the definition of what is a \\\"single\\\" input and \\\"single\\\" processing unit (here chosen to be a single neuron). These are largely assumed, and I think instantiated in a number of different questions the reviewers had in different ways. \\n\\nBeyond these concerns, another recurring theme was the length of the manuscript. It is true the authors have done much work, but there is also significant duplication of information. I agree that a more focused paper would likely fare much better for the shorter format here. \\n\\nIn all, while this is a borderline case, the reviewers in the end felt the merits outweighed the shortcomings and recommend accepting this work.\", \"additional_comments_on_reviewer_discussion\": \"The responses were pretty cut-and-dried.\"}", "{\"title\": \"Reminder\", \"comment\": \"Thank you again for your valuable feedback. 
As the end of the discussion period is approaching, we are hoping to hear your thoughts on our response.\\nWe hope that we have addressed your comments, and we would greatly appreciate it if you were willing to consider increasing your score if you are satisfied with our response.\"}", "{\"title\": \"Author Response Part 2\", \"comment\": \"*Question 2: What would happen if decoding flexibility were considered as another constraint?*\\n\\nUnfortunately, we\\u2019re not sure what the reviewer means by \\u2018decoding flexibility\\u2019, so we feel badly placed to answer this question. The setting we consider is already somewhat flexible in that the readout can decode one variable independently of the behaviour of all others, so in that sense it is maximally flexible. Another definition of \\u2018decoding flexibility\\u2019 might be more flexible decoders, i.e. nonlinear ones. That would imply a different set of constraints that lead to modularity, but is of the flavour that we tried to test with our nonlinear networks - there the decoders are arbitrary nonlinear networks, and yet we still find that our theory is predictive. Finally, we thought flexible decoders might mean flexibly decoding functions of the sources rather than just the sources themselves. In this case, it would depend on the precise implementation, but alone this does not seem sufficient to cause modularity: as long as the variables are linearly decoded you can decode all linear functions of the two variables, and a sufficiently flexible readout could predict all functions of the variables. Before commenting further it would be useful to understand more about the type of flexibility the reviewer was interested in.\\n\\nWe thank the reviewer again for their time and attention.\"}", "{\"comment\": \"Thank you for the detailed and thoughtful responses. 
They have addressed most of my concerns, particularly regarding the scope of the theory, the inclusion of additional theoretical results, and clarifications on the naturalistic image dataset. Incorporating these responses into the updated manuscript will strengthen the paper.\\n\\nHowever, I still believe that the paper would benefit from more empirical neuroscience evidence, such as aligning the model more closely with PFC representations and including additional neural data to support the theory.\\n\\nI would increase my score to 7 if allowed.\"}", "{\"title\": \"Author Response\", \"comment\": \"We would like to thank the reviewer for their careful, and largely positive, review of our work. Below we hope to address some of their concerns.\\n\\n*Weakness 1: Non-negative activities, what about inhibitory neurons?*\\n\\nThe reviewer is right that in our simple model we do not split between excitatory and inhibitory neurons. We do this as we are interested in the simplest possible account of modularity, and so use as few constraints as possible. We see this as a real positive. Thus our theory is largely modelling how excitatory neurons (cortical pyramidal neurons) should encode variables as simply as possible. With that in mind, it is interesting that by focusing on excitatory neurons we can already get non-trivial results that match neural data. Nevertheless, we agree with you that incorporating inhibitory neurons may lead to further non-trivial situations in which modularity occurs - we leave this for future work.\\n\\n*Weakness 2: L2 is cool, but why no other biological constraints (sparse connectivity, synaptic range\\u2026)?*\\n\\nThis is a good question. Ultimately we are looking for the simplest set of biological constraints that lead to modularity. You\\u2019re right that, to have the fullest picture of how biological constraints interact with representation, including things like sparse connectivity and synaptic range would be great. 
But again, we\\u2019re looking for the simplest set of constraints that induce modularity. It is surprising that simple energy minimisation in a linear autoencoder got us this far, yet was quite hard for us to understand fully. Further understanding things like sparse connectivity and synaptic range is in our overall mission, but that is a big ambition and will be the subject of other papers. We now include a statement in the discussion to this effect:\\n\\n> Additionally, our model is a purposefully simple model, there are many biologically pertinent effects whose impact on modularity would be interesting to study, such as connection sparsity, anatomical constraints, noise, or requiring additional computational roles from the network.\\n\\nWith particular reference to the use of the L2 activity loss (rather than other reasonable choices), in our new submission we include an additional theoretical result that shows that range-independence is sufficient to drive modularity for a wide range of choices of activity norm (new appendix C). \\n\\n*Weakness 3: How could the theory extend to more complex datasets, e.g. images and text*\\n\\nWe would likely proceed much as we did in the last panel of figure 2, which is an application of our theory to a complex image dataset. Apologies if this was not clear; we now highlight this in the main text:\\n\\n> We study the performance of QLAE trained to autoencode a subset of the Isaac3D dataset (Nie 2019), a naturalistic image dataset with well defined underlying latent dimensions.\\n\\nSo to proceed we would assume that the measured representation is encoding a particular set of latent variables, then reason about how the theory suggests the distribution of those source latents might lead them to be optimally encoded in a modular or mixed fashion. This has many obvious weaknesses (what are the latents? 
What if it is not just encoding but is computing with the latents, won\\u2019t that affect things?), but presents a good start that is already predictive, as in figure 2.\\n\\n*Weakness 4: Why do the PFC results not match? Both in subspace angle and neuron alignment*\\n\\nYes, this is confusing. We have a lot of ideas for exploring this discrepancy (in brief: the numerical subspace alignment might be fixed by changing hyperparameters of the task; and we are diving into the original neural data to test some hypotheses about what might cause the mixing on a neural level). These directions require more work, and so we have decided to move this section to the appendix and replace it with a discussion of other potential causes of mixed selectivity that our work suggests, beyond the currently predominant theories. Thank you for raising these issues.\\n\\n*Question 1: Does the theory generalise to other brain areas?*\\n\\nDespite moving the PFC section to the appendix, we remain hopeful that our work applies broadly to neural coding. We are interested in testing it in sensory and motor areas, as well as refining the removed PFC results. It is known, for example, that neurons in parietal cortex are highly responsive for single tasks (Lee, Krumin, Harris, Carandini, 2022) and not others (they are modular), while in the prefrontal cortex there are modular neurons for things like the abstract structure of a sequence (Shima & Tanji 2007). However, this study, like most other neuroscience studies, does not have enough diversity in its task structure to test for concepts like range independence. We are actively collaborating with experimentalists to run studies which directly test our theoretical claims in a variety of brain regions, and we hope to be able to answer this question in the coming years.\"}", "{\"comment\": \"I thank the authors for their detailed and helpful responses; they have clarified a number of my concerns. 
I think the changes to the paper strengthen it, making its contributions both clearer and more impactful to the field. As such, I believe this paper should be accepted and I have increased my score accordingly.\", \"reasons_for_not_increasing_the_score_further\": [\"I still have some concerns regarding the general applicability and importance of the extreme points outside of a perfect linear reconstruction constraint. I look forward to the authors' upcoming work on this.\", \"I am unable to award a 7.\"]}", "{\"title\": \"Reminder\", \"comment\": \"Thank you again for your valuable feedback. As the end of the discussion period is approaching, we are hoping to hear your thoughts on our response.\\nWe hope that we have addressed your comments, and we would greatly appreciate it if you were willing to consider increasing your score if you are satisfied with our response.\"}", "{\"summary\": \"This paper studies the conditions under which modularity should arise in optimal representations. The authors developed a mathematical theory of when a modular representation should be favored in a linear autoencoder. They found that modularity should appear when the support of the sources is \\u201csufficiently spread\\u201d. The paper also presented simulation results to show that some of the theoretical results may be generalizable to non-linear problems. The later sections of the paper applied these theoretical ideas to explain some experimental observations from neurophysiological experiments in cognitive tasks.\\nThe paper makes several interesting points about when modularity should be favored, and applies the theory to several examples.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Originality: The theoretical part of the work builds on prior work by Whittington et al., 2023 and other studies. Previous work by Whittington et al., 2023 assumed mutual independence of the sources. 
In the current work, the authors show that, with several additional assumptions, \\u201csufficient spread\\u201d of the factors of variation can also lead to modular representation. The theory has some new elements, although it is a bit incremental. The application to several neuroscience problems seems to be new.\", \"quality\": \"The paper considered both linear and nonlinear cases. This is a strength. For the former, analytical results were provided. For the latter, some preliminary numerical results were given.\\nThe paper also considered several neuroscience applications. This may also be seen as a strength.\", \"clarity\": \"The overall structure of the paper is clear. Some intuitions behind the theory were provided.\", \"significance\": \"The question of when modularity arises in optimal representation is an interesting one and we still lack a clear understanding. This work made a few interesting points on this problem.\", \"weaknesses\": \"The writing needs improvements throughout the paper. In particular, the description of the theory can be substantially improved. For example, Theorem 2.1 should be made more accessible.\\n\\n\\nWhile several applications are attempted, each application appears to be preliminary. If the model predictions and experimental tests can be made more rigorous, that would strengthen the paper.\\n\\nIn Section 5, there are some qualitative differences between the model predictions and the data. As the paper pointed out, Panichello and Buschman (2021) showed that a substantial fraction of the neurons were tuned to both colors, contradicting a key prediction of the model. This seems to be a more important feature of the data compared to the issue of orthogonality vs. non-orthogonality.\\n\\nLooking at the math, the theory appears to only work for scalar variables. Can it be applied to circular variables? If the answer is no, the applications to real data would be questionable. In section 5, color is sampled from a color-wheel. 
\\n\\nBased on the way the theory is written, the results seem to rely on the assumption that the energy (or cost) is a quadratic function of neural activity. If the cost scales linearly with neural activity, would the theoretical results change fundamentally? Assuming a linear scaling could make sense biologically, as the metabolic cost may scale linearly with the number of spikes. \\n\\nRelevant earlier theoretical literature on grid cell modularity was not cited/discussed (e.g., by Fiete/Burak et al, and Wei/Prentice/Balasubramanian).\\n\\n\\nIt is difficult to understand what is really going on in Fig. 1. Fig. 1 may come out too early and the results need to be better unpacked. \\n\\n\\nThe theory seems to ignore biologically-relevant noise.\", \"questions\": \"What are the assumptions about the noise in the system being studied? How does noise affect the theoretical results?\\n\\nIn section 5, the authors seem to be equating orthogonality with modularity. Am I understanding this correctly? If this is the case, can the authors unpack the idea?\\n\\nCan the authors unpack the results in Fig. 5c? Was that an actual simulation or just a schematic?\\n\\nThe definition of the matrix in Eq. 10 is unclear. Please clarify. \\n\\n\\nCan the authors explain what the first part of the title means?\\n\\nIn the abstract, it is stated that \\u201cFrom this theory, we extract and validate predictions\\u2026\\u201d What does \\u201cextract\\u201d predictions mean? [minor point]\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Comment\", \"comment\": \"We\\u2019d like to thank all the reviewers for their time and attention to our work, and for their enthusiasm. 
Their comments have helped us to improve and sharpen the paper.\\n\\nIn general the reviewers praised the rigorous analytical results (a7qJ, 1fwm), empirical tests in nonlinear networks (a7qJ, Gn2p), links to neuroscience (a7qJ, Gn2p), clarity of presentation (a7qJ, Gn2p), and choice of question (1fwm). In addition, the reviewers shared concerns that we have tried to address. To highlight the largest changes in the new submission, we have:\\n1. Moved the section on PFC to the appendix.\\n2. Extended the entorhinal section to comment on potential sources of mixed-selectivity.\\n3. Significantly clarified the main theorem presentation.\\n4. Added more theorems that show our core results generalise to other choices of activity norms and multidimensional latents.\\n\\nWe now run through the shared concerns in more detail.\\n\\n**PFC DATA** First, there was concern that, of the two comparisons to neural data, the PFC comparison was weaker. The original submission contained both a numerical mismatch of the subspace alignment between the theoretical and empirical representations (a7qJ), and neural-level modularisation results that do not at first blush agree with our theory (1fwm, a7qJ). Further, we previously elided concepts of orthogonality and modularity in confusing ways (1fwm). While we have lots of thoughts for how to improve and develop these results, we largely agree with the reviewers on these points and have concluded that, at this stage, the results are slightly too preliminary.\\n\\nAs a result we have moved the section on the PFC to the appendix, and have replaced it with a new section that expands how our theory contributes to the ongoing debate over the role of mixed selectivity in the brain. This highlights how our theory leads to multiple additional interpretations of mixed selectivity of neurons, beyond the prevalent view that mixed selectivity exists to enable flexible downstream categorization (Rigotti et al. 
2013).\\n\\n**Overly Simplistic Model** Second, many reviewers pointed to the simplicity of our model, asking about including inhibitory neurons (a7qJ), other biological constraints (sparse connectivity, synaptic range, anatomical structure) (a7qJ), biological noise (1fwm), and our focus on an autoencoding objective (Gn2p). These are all reasonable details to include. However, in contrast, we see the simplicity of our setting as a good thing! We are interested in understanding why neurons sometimes choose to mix their representations, and other times not. To do this we build the simplest model that can capture the observed phenomena. In order to make the representation encode the variables we use autoencoding, which implies minimal additional structure. Then we find that just energy minimisation leads us to rich phenomenology capable of explaining neural data in ways that were not previously possible, making testable experimental predictions. Each of these other effects would definitely be interesting to study, and other works have used such effects to derive interesting modularity conclusions (such as Liu, Khona, Fiete, & Tegmark, 2023), but with just this simplest set of constraints we can go surprisingly far!\\n\\n**L2 Activity Norm** Some particular concerns reviewers pointed to were the use of the L2 activity norm rather than some other norm (1fwm, Gn2p), and the limitation to scalar variables (1fwm). In our updated version we present new results that show that our core result (that support independence drives modularisation) generalises to other choices of activity norm (Appendix C in new submission) and to multidimensional variables (Appendix D). In particular, this means that the particular choice of activity norm is not vital, as long as the simple idea that spiking is energetically costly is used.\\n\\n**More Complex Tasks** Finally, reviewers wondered whether our work generalised to image or language tasks (a7qJ, Gn2p). 
First, we point out that figure 2 included experiments performed on a standard image dataset, Isaac 3D, and we now highlight this in the paper. This gives a model for how our work might be applied to these settings. That is, if you think a particular representation contains information about a set of variables, for example the latent factors in Isaac 3D, then our theory gives tools to think about what properties of those variables will determine whether they are represented in a modular or mixed fashion. We are actively trying to use these results to think about modularisation in more complex tasks, such as in LLMs, by making guesses about the encoded latent variables, and seeing whether we can predict the modularity of their representation from the support properties. This, however, will be future work, as it involves significant additional effort.\\n\\nOverall, we would like to thank the reviewers for their time and effort, and look forward to hearing what they have to say in response.\"}", "{\"summary\": \"This paper seeks to explain why and when a population of biological or artificial neurons sometimes modularise and sometimes entangle the representation of source variables. This is a fundamental question that is highly relevant to both neuroscience and AI. The authors propose and prove a new theory emphasising the importance of the shape of the empirical data distribution in extreme regions in dictating whether neurons are mixed selective or modular. Specifically, if the sources to be represented are supported in all extreme regions, the neurons modularise. 
The application of this theory outside of linear autoencoders is tested in feed forward and recurrent neural networks, including experiments that provide explanations for discrepancies in previous neuroscience literature.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work is original, of high quality and undoubtedly contributes to the community\\u2019s understanding of neural modularisation. The nonlinear verification of theory and additional application to neuroscience results are significant for the field and a strength of the paper. The submission is well written and clear throughout, although its clarity suffers somewhat due to the amount this submission seeks to cover.\", \"weaknesses\": [\"In my opinion this submission contains too much, and would benefit from more focus and time spent on fewer experiments. The appendix is already large but some experiments could be moved there.\", \"Figure text and panels are too small throughout.\", \"It is not clear how relevant encoding of the extreme points of source distributions are for computation / cognition. I.e. neurons do not just autoencode.\", \"The bio description of energy minimisation assumes l2 penalty is an appropriate penalisation function for modelling biology. This is a fair starting assumption, but no argument is presented about how this maps to the costs biological neurons will be seeking to minimise.\"], \"questions\": [\"Biologically inspired linear rnns are repeatedly mentioned. Linear rnns are less like biological circuits (which are nonlinear). Is the biologically inspired term referring to the energy costs?\"], \"150\": [\"satisfied for all w is a little unclear. The proof could\", \"Autoencoding seems limited as an objective for theory, in that brains perform computations over inputs and states and produce behaviour. 
Is it not the relevance of representing the extreme points for behaviour that is important?\", \"Section 2.2 assumes positive weights?\", \"120 \\\"be better\\\" is imprecise\", \"Fig 1d, neuron\\u2019s angle isn\\u2019t described?\", \"I don\\u2019t understand the what where regression justification.\", \"How relevant are these results for more typical ANN experiments? E.g classic image benchmarks or language modelling tasks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response Part 2\", \"comment\": \"*Q3: Can the authors unpack the results in Fig. 5c? Was that an actual simulation or just a schematic?*\\n\\nYes! Figure 5c (figure 4c in new submission) is a real simulation result! We trained an RNN model to output an agent\\u2019s current position and its displacement from reward as an agent moved around a 2D grid world. We found that, in line with the theory, when reward and position are range independent (i.e. when knowing the value of one variable does not constrain the allowed set of values for the other variable, this is true if reward is at least partially randomly scattered, since then reward could occur in all positions) their representation was perfectly modular, whereas when their ranges are not independent (some positions are never rewarded) the representations mix. This matches all known mixing and modularity results from entorhinal cortex - results that could not be explained before our theory. As such we consider these results complete and a ringing endorsement of the theory. In fact, we consider the theoretical side of the work so complete that we are moving on to testing them experimentally, in new neural recordings (though this is of course beyond the scope of this paper). 
Apologies for this not being clear.\\n\\n*Weakness 4: Theory only works for scalar variables, but PFC example is 2D at least, what\\u2019s up with that?*\\n\\n*Weakness 5: Seems to rely on L2 scaling of energy. If scaling is linear does stuff change massively?*\\n\\nWe thank the reviewer for pointing our attention to these shortcomings. In our new submission we include two new proofs that show that our core results generalise to cover both of these cases. First, in Appendix C, we show that support independent variables are optimally modular if you use any Lp norm as the activity loss for p > 1, and if p = 1 then the modular solution is optimal, but there might be other non-modular but orthogonal solutions that are equally good. Further, in Appendix D, we show that the same support independent results apply to multidimensional variables, i.e. a set of support independent multidimensional variables should also be modularised, again for all choices of Lp activity norm. We hope that this convinces you of the generality of our theory.\\n\\nRegarding the appropriate choice of activity norm we acknowledge that it is unclear. L1 expresses the idea, as the reviewer says, that counting spikes might be the reasonable cost. L2 instead, expresses the intuitive idea that an increase of spiking of 10Hz when you\\u2019re already spiking at 100 Hz might be more costly than the same increase from 0Hz. We\\u2019re not too dogmatic about which is the best choice, so we are thankful that our main point, that support properties are the drivers of modularity, extend to all such reasonable choices of norm.\\n\\n*Weakness 6: Doesn\\u2019t cite relevant work on grid cell modularity*\\n\\nWe thank the reviewer for their suggestion. Both the papers you mentioned hard code grid cell modularity, and so do not examine what the necessary conditions\\u2014either in the data distribution or in biological constraints\\u2014are to observe modularity or not. 
These papers, however, address different questions: Burak & Fiete provides a computational model of how multi-modular grid cells might path-integrate, whereas (Wei/Prentice/Balasubramanian and similar work by Stemmler) shows how multi-modular codes are particularly good for spatial coding and discusses how various aspects of the code might be optimised to store spatial information. For our purposes, it is important to note that neither of these works (or others) normatively explains why grid cells form modules over any other coding scheme; they just show that such a scheme is good. In our work, instead, we show why grid modules align with the neuron basis, something assumed in all other works.\", \"we_have_added_this_discussion_to_our_relevant_work_section\": \"> One of the many surprising features of grid cells is their modular structure: each grid cell is a member of one module, of which there are a small number in rats. Grid cells from the same module have receptive fields that are translated versions of one another (Stensola et al., 2012). Previous work has built circuit models of such modules showing how they might path-integrate (Burak and Fiete, 2009), or has assumed the existence of multiple modules, then shown that they form a good code for space, and that parameter choices such as the module lengthscale ratio can be extracted from optimal coding arguments (Mathis et al., 2012; Wei et al., 2015). However, neither of these directions shows why, of all the ways to code space, a multi-modular structure is best. Dorrell et al. (2023) study a normative problem that explains the emergence of multiple grid modules as the best of all possible codes, but their arguments for the emergence of multiple grid modules are relatively intuitive. In this work we are able to formalise parts of their argument, linking it cohesively to theories of modular and mixed coding.\"}" ] }
BxLK1M1f8T
Double Check My Desired Return: Transformer with Value Validation for Offline RL
[ "Yue Pei", "Hongming Zhang", "Chao Gao", "Martin Müller", "Mengxiao Zhu", "Hao Sheng", "Haogang Zhu" ]
Recently, there has been increasing interest in applying Transformers to offline reinforcement learning (RL). Existing methods typically frame offline RL as a sequence modeling problem and learn actions via supervised learning (RvS). However, RvS-trained Transformers struggle to align actual returns with desired target returns, especially when dealing with returns underrepresented in the dataset (interpolation) or higher returns that could be achieved by stitching sub-optimal trajectories (extrapolation). In this work, we propose a novel method that Double Checks the Transformer with value validation for Offline RL (Doctor). Doctor integrates the strengths of supervised learning (SL) and temporal difference (TD) learning by jointly optimizing the action prediction and the value function. SL stabilizes the prediction of actions conditioned on target returns, while TD learning adds stitching capability to the Transformer. During inference, we introduce a double-check mechanism: we sample actions around desired target returns and validate them with value functions. This mechanism ensures better alignment between the predicted action and the desired target return and is beneficial for further online exploration and fine-tuning. We evaluate Doctor on the D4RL benchmark in both offline and offline-to-online settings, demonstrating that Doctor achieves much better return alignment, both within and beyond the dataset. Furthermore, Doctor performs on par with or outperforms existing RvS-based and TD-based offline RL methods in final performance.
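The double-check inference step described in this abstract can be illustrated with a toy sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: `policy`, `q_value`, and all constants are hypothetical stand-ins for the trained return-conditioned Transformer and the learned value function.

```python
import math
import random

# Hypothetical stand-ins for the trained networks: a return-conditioned
# policy pi(s, R) -> a and an action-value estimate Q(s, a). In the real
# method these are trained jointly (SL for actions, TD for values).
def policy(state, target_return):
    return math.tanh(sum(state) + 0.01 * target_return)  # dummy scalar action

def q_value(state, action):
    return 50.0 * action + 10.0  # dummy linear value estimate

def double_check(state, desired_return, n_samples=8, noise=5.0, seed=None):
    """Sample candidate target returns around the desired one, query the
    policy for each, and keep the action whose value estimate is closest
    to the desired return."""
    rng = random.Random(seed)
    candidates = [desired_return + rng.gauss(0.0, noise) for _ in range(n_samples)]
    actions = [policy(state, r) for r in candidates]
    gaps = [abs(q_value(state, a) - desired_return) for a in actions]
    return actions[gaps.index(min(gaps))]

state = [0.1, -0.2, 0.3]
action = double_check(state, desired_return=40.0, seed=0)
```

Keeping the candidate whose value estimate is closest to the desired return is what filters out actions whose predicted outcome would drift away from the target, which is the alignment property the abstract emphasizes.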
[ "Offline Reinforcement Learning", "Transformer" ]
Reject
https://openreview.net/pdf?id=BxLK1M1f8T
https://openreview.net/forum?id=BxLK1M1f8T
ICLR.cc/2025/Conference
2025
{ "note_id": [ "to0QhPOxSS", "sisrYizuFP", "s9VlHTelO4", "jcwfsPcuni", "iJXPvCgUI9", "h8tWIi2jYx", "fX9WfpmTov", "YDoahPwlsz", "VqScOqqOMt", "UE9RvnmTWl", "TFgVfjRXKB", "NmnmBI6tWv", "JRjojK0vQW", "Hwmux6w4Qz", "E1prUT1JTC", "DCu7kMW04w", "BTOlkhS6Of", "6EI12PNkLO", "6AXDZpp8zJ", "483hiGSIUA" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732205207505, 1732729501173, 1732208340931, 1737524255370, 1732620514169, 1731079593615, 1732685587713, 1733525960004, 1732730471696, 1729326892253, 1732642326160, 1732206881625, 1732205659879, 1732246842368, 1732242980161, 1730187605083, 1732204565110, 1730475543514, 1732676589776, 1732207133890 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13368/Authors" ], [ "ICLR.cc/2025/Conference/Submission13368/Authors" ], [ "ICLR.cc/2025/Conference/Submission13368/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13368/Reviewer_LBGr" ], [ "ICLR.cc/2025/Conference/Submission13368/Reviewer_r5YX" ], [ "ICLR.cc/2025/Conference/Submission13368/Reviewer_r5YX" ], [ "ICLR.cc/2025/Conference/Submission13368/Area_Chair_djAz" ], [ "ICLR.cc/2025/Conference/Submission13368/Authors" ], [ "ICLR.cc/2025/Conference/Submission13368/Reviewer_PDqo" ], [ "ICLR.cc/2025/Conference/Submission13368/Authors" ], [ "ICLR.cc/2025/Conference/Submission13368/Authors" ], [ "ICLR.cc/2025/Conference/Submission13368/Authors" ], [ "ICLR.cc/2025/Conference/Submission13368/Authors" ], [ "ICLR.cc/2025/Conference/Submission13368/Reviewer_PDqo" ], [ "ICLR.cc/2025/Conference/Submission13368/Reviewer_p2GX" ], [ 
"ICLR.cc/2025/Conference/Submission13368/Authors" ], [ "ICLR.cc/2025/Conference/Submission13368/Reviewer_LBGr" ], [ "ICLR.cc/2025/Conference/Submission13368/Reviewer_p2GX" ] ], "structured_content_str": [ "{\"title\": \"Part2\", \"comment\": \"**Questions:**\\n\\n1. Have you tested your method on some robotics benchmarks?\", \"re\": \"Since Doctor processes one batch at a time, the computational overhead is relatively low, as shown in the table. Please refer to the response to the weakness.\"}", "{\"comment\": \"Thanks for your detailed feedback.\\n\\nWe have updated the latest version of the paper, where we have explained the necessity of the proposed value alignment method in the Introduction section. Additionally, we have added discussions of relevant papers to the Related Work section.\\n\\nWe acknowledge that the current version lacks a direct comparison with other return alignment methods. However, we believe that the experiments presented in Figure 3 clearly demonstrate the superiority of Doctor in terms of return alignment. Specifically, Doctor ensures better interpolation between underrepresented returns in the dataset (as shown on the left side of the red line) and extrapolates to higher returns to some extent (as shown on the right side of the red line). This leads to superior alignment performance compared to previous RvS methods (closer adherence to the ideal line).\\n\\nWe sincerely hope that our response can address your concerns.\"}", "{\"title\": \"Part 1\", \"comment\": \"**Weaknesses:**\\n> 1. 
The experimental results are insufficient in showing the stitching capability of Doctor.\\n> While stitching is one of the main reasons for integrating TD-learning into SL, the performance improvement compared to MTM (which is a pure RvS method) is incremental (708.6 vs 719.7), according to Table 1.\\n> Generally, D4RL MuJoCo Gym environments are not a good benchmark to evaluate stitching capabilities.\", \"re\": \"Yes, our target return is set based on n times the maximum return from the dataset, as used by DT. Specifically, it is calculated as:\\ntarget_return = min_return + (max_return-min_return)\\n\\nWe hope these additional experiments and explanations address your concerns. Thank you for your valuable suggestions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for the authors' response. I would like to keep my rating.\"}", "{\"summary\": \"The paper introduces a novel method called \\\"Doctor\\\" that integrates supervised learning (SL) and temporal difference (TD) learning to improve the alignment of predicted actions with desired target returns in offline reinforcement learning (RL). The method uses a bidirectional transformer and a double-check mechanism during inference to validate actions with value functions, ensuring better alignment and improved performance. The work is evaluated on the D4RL benchmark, demonstrating competitive performance compared to existing RvS-based and TD-based methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The core idea of the paper is interesting and well-motivated, addressing the challenge of aligning predicted actions with desired returns in offline RL.\\n2. The paper is well-written and easy to follow, making it accessible to readers with a background in RL and transformers.\\n3. 
The experimental results on the D4RL benchmark show that the proposed method, Doctor, outperforms several baselines in some tasks, indicating its potential effectiveness.\", \"weaknesses\": \"1. Model Choice (Bidirectional vs. Causal Transformer): The use of a bidirectional transformer, rather than a causal transformer, is not fully justified. Since inference is typically performed in a causal manner, it would be beneficial to provide more insight into why a bidirectional transformer was chosen and how it impacts the model's performance during inference.\\n2. Limitations Discussion: The paper lacks a detailed discussion of the limitations of the proposed work. It would be valuable to acknowledge potential drawbacks or scenarios where the method might not perform as well, setting realistic expectations for future research.\\n3. Inference Complexity: The inference complexity of the proposed method may be significantly higher than that of DT. A detailed comparison of the inference speed and computational overhead relative to other models would strengthen the paper.\\n4. Performance and Overhead: While the performance is competitive, the gains are marginal compared to the baselines. Given the higher computational cost, a more in-depth analysis of the trade-offs between performance and computational resources is necessary.\\n5. Evaluation Scope: The evaluation is conducted only on the D4RL benchmark, which may introduce a bias in the results. To ensure the robustness and generalizability of the proposed method, it would be valuable to conduct additional experiments on other offline RL benchmarks.\\n6. Reward Modeling Assumption: if I understand correctly, the underlying assumption is that value prediction is more general/robust than reward modeling for aligning the ground-truth reward during inference, which needs further theoretical justification. An analysis or explanation supporting this claim would add depth to the paper.\", \"questions\": \"1. 
Have you tested your method on some robotics benchmarks?\\n2. Can the inference process be accelerated?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the response. I increase the score to 6. Good luck.\"}", "{\"metareview\": \"This work proposes a method called Double Checks the Transformer with value validation for offline RL. This method integrates supervised learning (SL) and temporal difference (TD) learning to improve the alignment of predicted actions with desired target returns in offline reinforcement learning (RL). The work is evaluated on the D4RL benchmark, demonstrating competitive performance compared to existing RvS-based and TD-based methods.\\n\\nAfter the rebuttal and discussion, this paper received review scores of 3, 5, 6, 6. Reviewers raised several concerns:\\n1) lack of comparisons with recent methods\\n2) The performance on D4RL is not significantly better than other methods.\\n3) The motivation for aligning with the target return is not adequately justified in the current version of the article.\\n4) a lack of ablation studies\\n5) novelty is limited.\\n\\nThere are discussions between the reviewers and the authors. The reviewers who gave negative comments both replied and weren't fully satisfied with the responses from the authors. AC has checked the submission, the reviews, the rebuttal, and the discussion. AC sided with the negative reviewers and believed this work needs further effort to revise and improve the method's performance. Thus, a rejection is recommended.\", \"additional_comments_on_reviewer_discussion\": \"There are discussions between the reviewers and the authors. Two reviewers are not fully convinced; the expert reviewer with a 3 score gave the following reasons to argue against accepting this paper. 
He or she eventually championed the rejection.\\n\\nIn terms of final task performance, the proposed method does not outperform the latest state-of-the-art approaches. While the author highlights the motivation of aligning with the target return, the necessity of this motivation is not adequately justified in the current version of the article. It is recommended that the next version provide further explanations or experimental evidence to substantiate the importance of this alignment. Additionally, there is a lack of comparison with other alignment methods, particularly in terms of alignment itself, rather than solely focusing on final performance. For these reasons, I have maintained the current score.\"}", "{\"comment\": \"Thank you for increasing the score. We appreciate your feedback.\"}", "{\"summary\": \"RvS-trained Transformers lack 1) the capability to interpolate between underrepresented returns in the dataset, and 2) the capability to stitch information from multiple sub-optimal trajectories into a better one, due to their supervised nature.\\nThe paper introduces Doctor, which aligns the return of the policy with the target return, by selecting the action that has the nearest Q function (learned via expectile regression, similar to IQL) to the target returns.\\nDoctor shows much better return alignment across various target returns, and is on par with / outperforms other RvS methods in terms of performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"1. Doctor shows much better return alignment compared to previous RvS methods.\", \"Experimental results in Figures 1 and 3 show that Doctor achieves much better alignment.\", \"This alignment can extrapolate for some datasets (i.e. being better than the dataset).\", \"2. 
The algorithm of Doctor is simple and intuitive.\", \"The alignment happens at inference time, and the algorithm is straightforward (choosing the action with the closest returns)\", \"Combines SL and IQL-like loss, which does not need to query out-of-distribution actions.\"], \"weaknesses\": [\"1. The experimental results are insufficient in showing the stitching capability of Doctor.\", \"While stitching is one of the main reasons for integrating TD-learning into SL, the performance improvement compared to MTM (which is a pure RvS method) is incremental (708.6 vs 719.7), according to Table 1.\", \"Generally, D4RL MuJoCo Gym environments are not a good benchmark to evaluate stitching capabilities. Experimental results in D4RL Maze2D tasks or AntMaze tasks, which are designed for stitching, will be helpful in evaluating the stitching capabilities of Doctor.\", \"2. The experimental results are insufficient in showing that \\\"better alignment in returns helps offline-to-online fine-tuning\\\".\", \"Experiment results in Table 2 show that the performance increase during online fine-tuning is larger for ODT. It looks counterintuitive, considering the superior alignment capability of Doctor.\", \"Again, D4RL MuJoCo Gym environments might not be a good choice to evaluate offline-to-online fine-tuning. For example, Cal-QL[1] uses AntMaze, Kitchen, Adroit tasks for their experiments; experimental results on these environments might better highlight the superiority of Doctor.\", \"[1] Mitsuhiko Nakamoto et al., \\\"Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning\\\", NeurIPS 2023.\"], \"questions\": \"Q1. Does having better return alignment help the final performance?\\n* i.e., can the problem of return alignment be solved by just having high target returns? (e.g. DT[2] uses 5x maximum returns in the dataset)\\n* It will be insightful if there are experimental results for ablating the inference-time alignment (with properly tuned target returns, e.g. 
1x or 5x the maximum return of the dataset).\\n\\nQ2. Does having better return alignment help online fine-tuning?\\n* It will be exciting if Doctor is able to show clear improvements in online fine-tuning experiments on AntMaze or Kitchen or Adroit tasks, where offline-to-online methods (e.g. Cal-QL) have shown significant improvements.\\n\\nQ3. Other than Q1 (final performance) and Q2 (online fine-tuning), can you share your ideas on why having better return alignment is beneficial, if there are any?\\n* e.g., does it help reduce the effort of searching for the target return used in inference?\\n\\nQ4. Does Doctor improve stitching capability?\\n* Experimental results in D4RL Maze2D tasks or AntMaze tasks, which are specifically designed for stitching, would be informative to assess the stitching capability of Doctor.\\n\\nQ5. What target return is used for evaluation?\\n* Can you share how you found it? (e.g. is it the maximum return from the dataset?)\\n\\n[2] Lili Chen et al., Decision Transformer: Reinforcement Learning via Sequence Modeling, NeurIPS 2021.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Global Response**\\n\\nWe sincerely thank all the reviewers for their valuable feedback on our work.\\n\\nTo address common questions and share additional experimental results, we have prepared a global response covering the following key aspects:\\n\\n**1: Additional experimental results**\\n\\nTo further demonstrate the effectiveness of our method, we have expanded our evaluation to include additional benchmark tasks. Specifically, we have added three Maze2D tasks and three Adroit tasks in the offline setting, as well as three Adroit tasks in the offline-to-online setting.\\n\\nWe have also compared our method with additional baselines, including the state-of-the-art method [1]. 
Furthermore, we provide additional visualizations of the alignment experiment [776-790], along with evaluations of inference time and training overhead [847-856].\\n\\nAll new results are included in the revised paper.\\n\\n[1] Shengchao Hu et al., Q-value Regularized Transformer for Offline Reinforcement Learning. ICML 2024.\\n\\n**2: Why does Doctor achieve on-par performance in some tasks?**\\n\\nWe would like to emphasize that our primary goal is to align actual returns with desired target returns, as demonstrated in Figure 3. The improvement in final performance is merely a byproduct of this alignment and not a direct optimization target. From this perspective, it is not surprising that Doctor does not significantly outperform other baselines that focus on policy improvement.\", \"we_would_like_to_emphasize_our_motivation_and_novelty_as_follows\": \"**Motivation**: Previous works focus solely on improving final performance. In contrast, our work aims to extend this to a range of desired returns, presenting a new and more challenging task. Achieving this allows us to extract policies at various performance levels, which is particularly valuable for scenarios like game AI, where NPCs with diverse skill levels are essential for creating balanced and engaging gameplay.\\n\\n**Novelty**: The core novelty of our work lies in the double-check mechanism, which combines self-supervised learning and value function learning. This enables the model to align predicted values with target returns more accurately while also enhancing its ability to fine-tune through further online exploration. Our method is the first to combine self-supervised learning and value function learning in this manner, and we believe this is a significant contribution to the field.\\n\\n**3: Revised paper**\\n\\nWe have incorporated the reviewers' suggestions into the revised paper and uploaded it, with all major updates highlighted in blue for your convenience. 
The key revisions are as follows:\\n\\n1) We have provided additional justification of value alignment (in appendix A).\\n\\n2) Our experiments have been extended, comparing with additional baselines across more benchmark tasks (in appendix C).\\n\\n3) We have included new visualizations for the alignment experiment, along with evaluations of inference time and training overhead (in appendix D.3).\\n\\n4) More related papers have been cited.\\n\\nThank you again for your feedback. We are looking forward to hearing your thoughts.\"}", "{\"title\": \"Part 1\", \"comment\": \"**Weaknesses:**\\n\\n>1. I am not sure about the novelty as self-supervised learning and value function learning are two common techniques.\", \"re\": \"We explored the impact of different target return sampling sizes N in Figure 4; as N increases, Doctor achieves better alignment with the given target return.\\nWe also included the results for different transformer architectures in Figure 3, demonstrating that the BiT-based transformer (MTM) is a better choice than causal transformers (DT).\\n\\n>3. The performance over the baselines on different tasks seems inconsistent, as shown in Table 1.\\n\\nIt is unrealistic to expect Doctor to outperform baselines on every individual task. 
This is consistent with observations in prior work, where different methods excel in different scenarios due to varying task characteristics. Furthermore, we would like to emphasize that our primary goal is to align actual returns with desired target returns. The improvement in final performance is merely a byproduct of this alignment.\\n\\n\\nWe hope these additions and clarifications address your concerns.\"}", "{\"comment\": \"We appreciate your suggestion to further clarify the motivation. We will incorporate this in the next revision.\\n\\nThank you again for your valuable feedback and for increasing the score.\"}", "{\"comment\": [\"Thank you for the detailed response and additional experimental results.\", \"R1. Concern about Evaluation\", \"First, experiments in Appendix C.1 (especially Maze2D) clearly shows the stitching capability of Doctor.\", \"It **addresses the concern** on the evaluation of Doctor for its stitching capability.\", \"R2. Concern about Motivation\", \"For the response on the motivation, I agree that return alignment itself is an important problem.\", \"However, I still feel this point is not clear in the paper.\", \"**I strongly recommend emphasizing (clarifying) this motivation in the introduction section.**\", \"Based on the responses, I have increased the score to **6**.\"]}", "{\"summary\": \"This paper introduces Double Checks the Transformer with Value Validation for Offline Reinforcement Learning (Doctor), a framework designed to address alignment issues between actual returns and desired target returns, especially in cases involving underrepresented or higher returns. Specifically, this approach integrates additional temporal-difference (TD) learning into the Transformer architecture, sampling actions around desired target returns and validating them through value functions. 
Experiments conducted in both offline and offline-to-online settings demonstrate improved alignment with target returns, showcasing the effectiveness of the proposed approach.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy-to-read.\\n2. The motivation is clearly presented, focusing on addressing alignment issues within sequence modeling methods.\", \"weaknesses\": \"1. The paper lacks a comparison with state-of-the-art methods, such as RADT[1], which also addresses target return alignment, and QT[2], which enhances stitching capability by incorporating the TD method.\\n2. The performance improvements in D4RL are not significantly better than these baselines, diminishing the perceived effectiveness of the proposed framework. Although this work integrates the TD learning component into the MTM network, the improvement over MTM is marginal. Furthermore, during online fine-tuning, the performance gains are also less pronounced compared to the ODT baseline.\\n\\n[1] Tsunehiko Tanaka, et al. Return-Aligned Decision Transformer. 2024\\n\\n[2] Shengchao Hu, et al. Q-value Regularized Transformer for Offline Reinforcement Learning. 2024 ICML.\", \"questions\": \"1. Does interpolating between underrepresented returns and stitching information address the same issue? While stitching sub-optimal trajectories can yield better performance, it could also lead to undesirable outcomes not observed in the dataset, representing underrepresented returns. Thus, should the primary focus be on improving the stitching capability?\\n2. In line 167, $R_t$ is defined as the discounted return, while Decision Transformer (DT) treats it as the undiscounted sum of rewards. What is the rationale behind using the discounted return in this context?\\n3. In lines 185\\u2013186, how does the action-value $q_t$ provide the model with the ability to stitch sub-optimal trajectories?\\n4. 
Why was a bi-directional Transformer chosen, given that most DT-based approaches utilize a causal Transformer to predict actions in an auto-regressive manner? This causal structure is typically important during inference to predict future actions based on historical sequences.\\n5. Is the comparison of results in Table 1 equitable? The Doctor method requires sampling N target returns for alignment. Do the baseline methods also allow for such extensive sampling of target returns to construct their final results?\\n6. What is the time complexity of inference when compared to other baselines?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Part 1\", \"comment\": \"**Weaknesses:**\\n>1. Model Choice (Bidirectional vs. Causal Transformer): The use of a bidirectional transformer, rather than a causal transformer, is >not fully justified.\", \"re\": \"We would like to clarify that our method does not rely on the 'Reward Modeling Assumption'. The better alignment and more robust performance of Doctor come from the double-check mechanism. Previous works such as DT assume that the return-conditioned learned model can generate a trajectory that achieves the desired return. We demonstrate that this is not the case, as previous works only care about improvement in the final performance and ignore the alignment assumption. In our work, we introduce a Q-function to double-check the alignment with the target returns, serving as a twofold verification (see appendix A in our revised version for more explanation.\"}", "{\"summary\": \"This paper proposes to combine supervised learning and temporal difference learning for offline reinforcement learning. First, the model is pretrained by randomly masking a subset of a trajectory and predicting it. In temporal difference learning, the model is trained to predict the action value. 
Experiments show promising results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. It looks reasonable to combine supervised learning and temporal difference learning for offline reinforcement learning.\\n2. The paper is easy to follow.\\n3. Experiments show promising results.\", \"weaknesses\": \"1. I am not sure about the novelty as self-supervised learning and value function learning are two common techniques.\\n2. Ablation studies about the design choices of the proposed framework are needed.\\n3. The performance over the baselines on different tasks seems inconsistent, as shown in Table 1.\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the author's response.\\n\\nIn terms of final task performance, the proposed method does not outperform the latest state-of-the-art approaches. While the author highlights the motivation of aligning with the target return, the necessity of this motivation is not adequately justified in the current version of the article. It is recommended that the next version provide further explanations or experimental evidence to substantiate the importance of this alignment. Additionally, there is a lack of comparison with other alignment methods, particularly in terms of alignment itself, rather than solely focusing on final performance. For these reasons, I have maintained the current score.\"}", "{\"title\": \"Part 2\", \"comment\": \"**Questions**\\n\\n>4. Why was a bi-directional Transformer chosen, given that most DT-based approaches utilize a causal Transformer to predict >actions in an auto-regressive manner? 
This causal structure is typically important during inference to predict future actions based >on historical sequences.\", \"re\": \"Although Doctor samples N target returns for alignment, we process them as a batch and feed them into the model, resulting in relatively low computational overhead compared to other methods. \\n\\nRegarding the baseline methods, they are not suitable for the sampling strategy used by Doctor. This is because these methods typically lack a Q-function. Without a Q-function, even if target returns were sampled, these methods would have no mechanism to determine the optimal actions for the target return alignment.\\n\\n>6. What is the time complexity of inference when compared to other baselines?\\n\\nWe have included a detailed comparison of inference speed and computational overhead with other models in the appendix. The results demonstrate that Doctor remains efficient during inference compared to other approaches.\\n\\n| Time Complexity | DT | MTM | QT | Doctor |\\n|------------------|-------|-------|-------|--------|\\n| Inference (seconds) | 0.01 | 0.056 | 0.016 | 0.065 |\\n| Training (seconds) | 2.13 | 1.27 | 2.51 | 1.34 |\\n\\n\\nWe hope these additions and clarifications address your concerns.\"}
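As an illustrative aside on the sampling-and-validation procedure described in the response above, here is a minimal Python sketch of a "double check" at inference time. All names (`policy`, `q_value`, the Gaussian perturbation, and the selection rule) are hypothetical assumptions for illustration, not the actual implementation discussed in the thread: a batch of candidate target returns is sampled around the desired one, a return-conditioned policy proposes an action for each, and the learned value function is used to keep the action whose predicted return best matches the target.

```python
import numpy as np

def double_check_action(policy, q_value, state, target_return,
                        n_samples=8, scale=0.05, rng=None):
    """Sketch: sample candidate target returns around the desired one,
    generate one action per candidate, and validate with the Q-function.

    policy(state, r)  -> action conditioned on return r   (hypothetical API)
    q_value(state, a) -> scalar predicted return of action a (hypothetical API)
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    # Perturb the desired return; in practice these are processed as one batch.
    candidates = target_return + scale * abs(target_return) * rng.standard_normal(n_samples)
    actions = [policy(state, r) for r in candidates]
    # Double check: keep the action whose predicted value is closest
    # to the desired target return.
    scores = np.array([q_value(state, a) for a in actions])
    best = int(np.argmin(np.abs(scores - target_return)))
    return actions[best]
```

This mirrors the point made in the response that baselines without a Q-function have no mechanism to pick among sampled target returns, since the final `argmin` step requires a value estimate.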
BxBt8WLfqE
Informed Machine Learning with a Stochastic-Gradient-based Algorithm for Training with Hard Constraints
[ "Qi Wang", "Christian Piermarini", "Frank Edward Curtis" ]
A methodology for informed machine learning is presented and its effectiveness is shown through numerical experiments with physics-informed learning problems. The methodology has three main distinguishing features. Firstly, prior information is introduced in the training problem through hard constraints rather than through the typical modern practice of using soft constraints (i.e., regularization terms). Secondly, the methodology does not employ penalty-based (e.g., augmented Lagrangian) methods since the use of such methods results in an overall methodology that is similar to a soft-constrained approach. Rather, the methodology is based on a recently proposed stochastic-gradient-based algorithm that maintains computational efficiency while handling constraints with a Newton-based technique. Thirdly, a new projection-based variant of the well-known Adam optimization methodology is proposed for settings with hard constraints. Numerical experiments on a set of physics-informed learning problems show that, when compared with a soft-constraint approach, the proposed methodology can be easier to tune, lead to accurate predictions more quickly, and lead to better final prediction accuracy.
[ "nonlinear optimization", "stochastic gradient methods", "constrained optimization", "physics-informed learning" ]
Reject
https://openreview.net/pdf?id=BxBt8WLfqE
https://openreview.net/forum?id=BxBt8WLfqE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zIWqIw70Kx", "xnaIwYXgd1", "xmvuotAXfZ", "xXtvXfGGJb", "xLmoslsYHK", "uJGlhHLPIV", "mBSapFPg9m", "kHbaoY07dq", "iNpiWc4qIO", "fC1kYp8mjM", "d3ZBPl80dn", "bUbyVAzFL3", "WGRIbGh6G8", "VdcdCt5LlS", "V7aWZyAyuP", "UAUEOTbEYD", "RhkJGeEry1", "Oll5inmgWn", "ML3LwEsZvK", "Ehkv5maFNF", "CJPkRUuir2", "A0UyqIq5qc", "3gF7P7bc3I", "0onVKbzQOo" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review" ], "note_created": [ 1732732687990, 1731968924768, 1732348561108, 1730741573373, 1737523937858, 1731968775251, 1732347836158, 1732516199503, 1731968986113, 1731472949497, 1729772095959, 1730570961670, 1732528024793, 1732347746420, 1732347897824, 1731969095710, 1731968823548, 1732541469897, 1732549358233, 1731969136875, 1732619672835, 1730641589634, 1732732940263, 1734648944221 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Reviewer_CkE3" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Area_Chair_odg5" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Area_Chair_odg5" ], [ "ICLR.cc/2025/Conference/Submission8860/Reviewer_nK6C" ], [ "ICLR.cc/2025/Conference/Submission8860/Reviewer_FQAx" ], [ "ICLR.cc/2025/Conference/Submission8860/Reviewer_nK6C" ], [ 
"ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Reviewer_29rd" ], [ "ICLR.cc/2025/Conference/Submission8860/Reviewer_CkE3" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Reviewer_FQAx" ], [ "ICLR.cc/2025/Conference/Submission8860/Reviewer_29rd" ], [ "ICLR.cc/2025/Conference/Submission8860/Authors" ], [ "ICLR.cc/2025/Conference/Submission8860/Area_Chair_odg5" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for further engagement and questions.\\n - **For question 1**, the addition of more hard constraints has two effects: It adds more nonlinearity to the problem and restricts further the search directions in early iterations. This initially has the effect of steering the algorithm toward solutions with larger PDE residuals, as shown in the table below at Epochs 100, 1000, and 5000. However, as the algorithm progresses, the 9-hard-constraint model yields smaller overall PDE residuals. This is reasonable since the additional constraints helps to guide the algorithm to better solutions, ultimately leading to improved predictions and a lower total loss. We are happy to add this discussion to Appendix E.\\n\\n **Table**: performance of P-Adam(con) and P-Adam(con)-9-constr at selected epochs. Both are the running performance at one of the five randome runs. 
\\\\|\\\\|c\\\\|\\\\|_inf represents the $\\\\ell\\\\infty$ norm of the hard constraints.\\n\\n | | P-Adam(con) | | | | | P-Adam(con)-9-constr | | | |\\n |-------|-------------|----------|-------------------|-----------------|---|----------------------|----------|-------------------|-----------------|\\n | Epoch | Loss | PDE Residual | Data-fitting Loss | \\\\|\\\\|c\\\\|\\\\|_inf | | Loss | PDE Residual | Data-fitting Loss | \\\\|\\\\|c\\\\|\\\\|_inf |\\n | 0 | 3.10E-01 | 1.46E+01 | 3.09E-01 | 4.14E+00 | | 3.10E-01 | 1.46E+01 | 3.09E-01 | 4.17E+00 |\\n | 100 | 3.08E-01 | 1.83E+01 | 3.06E-01 | 3.64E+00 | | 3.10E-01 | 1.15E+01 | 3.09E-01 | 3.65E+00 |\\n | 1000 | 7.79E-02 | 4.78E+02 | 3.01E-02 | 1.19E+01 | | 3.72E+01 | 3.42E+05 | 2.93E+00 | 7.36E+02 |\\n | 5000 | 5.86E-03 | 5.28E+01 | 5.89E-04 | 3.74E+00 | | 2.80E-01 | 1.46E+03 | 1.34E-01 | 1.41E+01 |\\n | 10000 | 3.39E-03 | 3.34E+01 | 4.68E-05 | 3.13E+00 | | 8.15E-05 | 7.70E-01 | 4.49E-06 | 3.10E+00 |\\n | 20000 | 7.28E-04 | 6.26E+00 | 1.03E-04 | 1.71E+00 | | 3.68E-05 | 3.50E-01 | 1.82E-06 | 1.63E+00 |\\n | 30000 | 2.71E-04 | 2.49E+00 | 2.17E-05 | 1.43E+00 | | 2.02E-04 | 1.99E+00 | 3.51E-06 | 1.55E+00 |\\n* **For question 2**, \\n 1. For the concern that the error performance at the hard constraints is similar to that at other times, we highlight that ODE/PDE constraints exhibit a strong dependency on neighboring inputs. For instance, in the Spring problem, the first hard constraint is:\\n\\\\begin{equation*}\\n m \\\\frac{d^2 u(t)}{d t^2} + \\\\mu \\\\frac{d u(t)}{d t} + k u(t) = 0 \\\\quad \\\\text{at}\\\\quad t = \\\\frac{4}{29}.\\n\\\\end{equation*}\\nIf this quantity is close to zero, we expect the constraint value to also be close to zero at, e.g., $t=\\\\frac{3}{29}, \\\\frac{5}{29}$, etc. 
This behavior is evident in Figure 14, as we discussed in Line 863: \\\"The ODE residuals are significantly reduced at and near the times treated as hard constraints, i.e., $\\\\{\\\\tfrac{4}{29}, \\\\tfrac{12}{29}, \\\\tfrac{21}{29}\\\\}$, when comparing the soft-constrained method (Adam(unc)) to the hard-constrained methods.\\\"\\n\\n 2. Regarding the concern about comparing the performance of Adam(con) and P-Adam(con) at hard constraints residuals, we emphasize that both algorithms aim to find a KKT solution of the constrained problem (1), i.e., a point $w$ associated with some $\\\\lambda$ such that\\n \\\\begin{align*}\\n \\\\nabla f(w) + \\\\nabla c(w) \\\\lambda = 0 \\\\quad \\\\text{ and }\\\\quad c(w) = 0.\\n \\\\end{align*}\\n Therefore, a solution $w$ with better constraint residual $||c(w)||$ does not necessarily indicate a better solution overall. For the PIML problem, Figures 1, 3, and 5 display the total training loss, which combines some data-fitting loss and some residual of the differential equations. We believe this metric is a more meaningful measure of algorithm performance in solving PIML problems. Additionally, Figures 2, 4, 6, and 7 demonstrate that P-Adam(con) produces accurate predictions, suggesting that the predictions adhere to the physical rules governing the system.\"}", "{\"comment\": \"**Response to Weakness 1**: Our main motivation is to develop a hard-constrained method for solving physics-informed machine learning problems. Through our experiments, we demonstrate that our hard-constrained approach (P-Adam-con), which incorporates additional hard constraints, outperforms the penalty-based method (Adam-unc), as discussed in Lines 57\\u201362. Our intuition is that hard constraints guide the neural network to prioritize mapping the PDE solution even only at certain inputs, enabling faster and more efficient training. We are happy to incorporate this explanation into Section 1. 
We will upload a revised PDF by November 21.\\n\\n**Response to Weakness 2**: \\nWe disagree with the assertion that Sections 1.1(a) and 1.1(b) exaggerate our contributions. Regarding Section 1.1(a), our proposed method---Algorithm 1 with steps computed using Algorithm 2---is indeed a novel approach for solving general constrained stochastic objective optimization problems. (To say this is not novel is to say that Adagrad and Adam are not distinct from each other, which we do not think is a reasonable assertation.) This innovation extends naturally to solving hard-constrained physics-informed machine learning problems. For Section 1.1(b), the computational efficiency of our method is addressed in Section 2.3, where we detail an efficient approach for solving the linear system (3) when the matrix $H$ is diagonal. This contribution is also novel and highlights a key advantage of our approach. \\n\\nRegarding Section 2.1, we did not claim to have designed Algorithm 1 or developed Theorem 1. These were appropriately cited from Berahas et al. (2021), (in Line 109-110, Line 136, Line 162-165). Instead, the purpose of Section 2.1 is to introduce this algorithmic framework and demonstrate how it can be applied to solve physics-informed machine learning problems with hard constraints. This framework forms the foundation of our new proposed method, Algorithm 2, which features a unique step computation mechanism.\\n\\nWe are happy to incorporate these clarifications into the manuscript. We will upload a revised PDF by November 21.\\n\\n**Response to Weakness 3**: The difference between our method and that of M\\u00e1rquez-Neila et al. (2017), namely, the application of a projection operator to the gradient, is a crucial modification that results in superior performance, as demonstrated in the experiments section. (In many cases, the hard-constrained approach of M\\u00e1rquez-Neila et al. 
(2017) offers worse performance than a soft-constrained approach, whereas with our modification the hard-constrained approach indeed becomes better than a soft-constrained one.) The rationale for employing this projection is explained in Lines 181\u2013191. When $H$ is the identity, the component of the search direction in the range space of $\\nabla c(w_k)$, denoted by $v_k$, can be computed by $v_k = -\\nabla c(w_k) (\\nabla c(w_k)^T \\nabla c(w_k))^{-1} c(w_k)$, and the component in the null space of $\\nabla c(w_k)^T$, denoted by $u_k$, can be computed as $u_k = -Z_k(Z_k^TZ_k)^{-1}Z_k^Tg_k$. Note $Z_k(Z_k^TZ_k)^{-1}Z_k^T$ is a projection operator onto $\\text{Null}(\\nabla c(w_k)^T)$. In other words, the search direction can be decomposed into two parts: one part is independent of the currently evaluated stochastic gradient $g_k$, while the other part is solely the projected stochastic gradient. Consequently, when taking momentum of the stochastic gradient to compute the search direction, the components of the stochastic gradients lying in $\\text{Range}(\\nabla c(w_k))$---which do not affect the current step computation---should not be accumulated. Therefore, we project out this component, as shown in Line 1 of Algorithm 2.\\n\\nThe reviewer should also recognize that our explanation is applicable even though the Adam approach does not use $H$ as the identity matrix. It should be understood this way: Adam involves running averages of gradient and squared-gradient components. In other words, it does not involve running averages of scaled gradient and squared-gradient components. For the same reason, our projected Adam method involves running averages of projected gradient and projected squared-gradient components, even though the ultimate step is computed with the diagonal scaling applied. Overall, our approach is entirely consistent with Adam for the unconstrained setting.\"}", "{\"comment\": \"Dear Reviewer, we have updated the PDF. 
The changes we said would be made by November 21 have been incorporated. Appendix F discusses the performance of increasing hard constraints by selecting more inputs or using the same number of constraints by aggregating ODE residuals at multiple times. Appendix F also shows, in the Spring problem, how the PDE residual (both soft and hard constraints) behaves after training.\\n\\nFor **Experiment Comment 4**, we found that the experiment indeed uses a subset of soft constraints to define hard constraints. Specifically, we selected times from the soft constraints that are closest to {0.14, 0.4, 0.7} to define the hard constraints. This resulted in times {4/29, 12/29, 21/29}. We have corrected this in the PDF.\\n\\nFor **Comments Regarding Presentation 2**, we did not move Appendix C to the main body in this version, as we have added other appendix sections that may also be considered for inclusion in the main body. We might consider doing so in future versions.\"}", "{\"summary\": \"The paper proposes a new methodology in the area of physics-informed machine learning by incorporating \u201chard constraints\u201d during stochastic-gradient-based training. The approach differs from traditional \u201csoft-constrained\u201d methods that add penalty terms to the objective function, which can be difficult to tune and less effective. The main innovation is a novel projection-based variant of the Adam optimizer (P-Adam), adapted for hard-constrained optimization. Numerical experiments show that this approach outperforms traditional Adam on four tasks: a 1D spring oscillator, a chemical engineering reaction model, 1D Burgers' equation, and 2D Darcy flow.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Strengths\", \"Unlike conventional methods based on penalty terms, the use of hard constraints directly embeds domain-specific knowledge. 
The performance improvements based on this are demonstrated in the case studies.\", \"The authors include a rigorous discussion of the optimization method, starting from the original constrained Sequential Quadratic Programming (SQP) framework settings.\", \"The experimental results reveal that the proposed methodology leads to better prediction accuracy and requires fewer hyperparameter adjustments, which is advantageous for real-world applications where tuning may be computationally expensive.\"], \"weaknesses\": [\"Weaknesses\", \"The novelty of this work could use clarification. The new techniques do not seem to differ methodologically from the related algorithms presented, e.g., in M\\u00e1rquez-Neila et al., Berahas et al., and Curtis et al., except the introduction of momentum methods. Moreover, projection of optimization steps given hard constraints has been demonstrated, e.g., by Chen et al. (https://arxiv.org/abs/2402.07251).\", \"The experimental settings considered have questionable relevance to the state of the art in this area. The methods are only compared against standard and soft-constrained Adam, rather than the bulk of methods for physics-informed machine learning. Moreover, the generalization to practical problems is unclear. Only small-scale problems are considered, enabling the use of non-stochastic gradient descent, and half the data in the batch setting (which may not be realistic).\"], \"questions\": [\"Questions\", \"Pg 3 suggests that the Lipschitz constants for the objective gradient and constraint Jacobian can be estimated. What effects does this have on the convergence or the exactness of the method?\", \"Can \\u201calmost-surely\\u201d be defined precisely throughout? 
Does this refer to a probability?\", \"Pg 6 suggests that only a few terms should be treated as \\u201chard constraints.\\u201d Is there a systematic way to determine which terms should be treated as hard constraints?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"**Weakness 1**: The novelty of this work could use clarification. The new techniques do not seem to differ methodologically from the related algorithms presented, e.g., in M\\u00e1rquez-Neila et al., Berahas et al., and Curtis et al., except the introduction of momentum methods. Moreover, projection of optimization steps given hard constraints has been demonstrated, e.g., by Chen et al. (https://arxiv.org/abs/2402.07251).\\n\\n**Response**: Thank you for this comment and pointing out the reference Chen et al., which is related to our work but only thematically. We are happy to make changes to the manuscript PDF to address this and other comments. We will upload the revised PDF on November 21. \\n\\nWe will clarify the relationship between Algorithm 1 and the works of M\\u00e1rquez-Neila et al., Berahas et al., and Curtis et al. in the manuscript. Lines 107\\u2013110 state that Algorithm 1 is a simplified version of Berahas et al. (2021). On Line 136, immediately following Theorem 1, which presents the convergence theory for Algorithm 1, we also cite Berahas et al. (2021, Corollary 3.14) and Curtis et al. (2023a, Equation (16)).\\n\\nAlgorithm 2 is a newly proposed algorithm introduced in this manuscript. It differs from Berahas et al. (2021, Corollary 3.14) and Curtis et al. (2023a, Equation (16)) in that Algorithm 2 incorporates the momentum of the stochastic gradient. While Algorithm 2 shares some similarities with M\\u00e1rquez-Neila et al. (2017), we clarified the key difference in Lines 182\\u2013185. 
Specifically, Algorithm 2 utilizes the momentum of the component of the stochastic gradient in the null space of $\\\\nabla c(w_k)^T$ rather than the entire stochastic gradient. This is a nontrivial change. (For example, the practical diagonal-scaling variants of Adam and Adagrad are considered distinct methods. Ours is as distinct as these methods are from each other.) In our experiments, this distinction demonstrates the superior performance of our method compared to that of M\\u00e1rquez-Neila et al. (2017). \\n\\nFor the method in Chen et al. (https://arxiv.org/abs/2402.07251), it is limited to constraints of the form $Bu(x) + Ax = b$, where $x$ is the input, $u(x)$ is the neural network approximating the PDE solution, and $(A, B, b)$ are given parameters. In other words, their method is applicable only to linear relationships between PDE inputs and solutions. Consequently, their approach cannot be applied to solve all the problems in our experiments, where the constraints involve nonlinear relationships between the input and the PDE solution.\\n\\nOur method is capable of handling problems with general nonlinear and nonconvex constraints. Additionally, their use of projection differs from ours: they project the model predictions onto the feasible region defined by the linear equality constraints, whereas we only use the projected stochastic gradient when taking momentum. Otherwise, our approach allows iterates to be infeasible, which is the case for the best-performing state-of-the-art methods for solving constrained optimization problems.\\n\\n**Weakness 2**: The experimental settings considered have questionable relevance to the state of the art in this area. The methods are only compared against standard and soft-constrained Adam, rather than the bulk of methods for physics-informed machine learning. Moreover, the generalization to practical problems is unclear. 
Only small-scale problems are considered, enabling the use of non-stochastic gradient descent, and half the data in the batch setting (which may not be realistic).\\n\\n**Response**: Thank you for this comment. However, since it does not specify a particular method for physics-informed machine learning that the reviewer believes is directly comparable to our method, we would prefer some guidance from the reviewer's expertise. If given guidance, then we would be happy to provide the results of an experiment in the paper. Other approaches in the literature either fall into the category of soft-constrained methods or are not directly comparable to our approach due to distinct differences in per-iteration cost (e.g., other methods have much higher per-iteration cost) or other features. We would be happy to include a comparison with a method that the reviewer can direct us to that is directly comparable.\"}", "{\"comment\": \"Dear reviewer, we have updated the PDF. The changes mentioned will be updated on November 21 are incorporated.\"}", "{\"comment\": \"Dear reviewers,\\n\\nThe authors have provided individual responses to your reviews. Can you acknowledge you have read them, and comment on them as necessary? The discussion will come to a close very soon now:\\n- Nov 26: Last day for reviewers to ask questions to authors.\\n- Nov 27: Last day for authors to respond to reviewers.\\n\\nYour AC\"}", "{\"comment\": \"**Weakness 1**: **Scalability of constraints**: While the authors argue that a small number of constraints often suffices for good performance, the scalability of the method as the number of constraints increases is not extensively discussed. As the number of constraints grows, the projection step may become less efficient, and solving the constraint-related equations can become computationally intensive. 
It would be beneficial for the authors to elaborate, either theoretically or empirically, on the algorithm's limitations and performance when dealing with a larger set of constraints, this could be accomplished on more scalable problems, which also help tackle the limited empirical scope weakness mentioned below.\\n\\n**Response**: We thank the reviewer for this comment. For the computational cost, we denote the number of constraints as $m$. The main computation cost of Algorithm 2 is Line 1 and Line 6. In Line 1, the computation cost of computing $\\\\bar{g}_k$ is as follows: cost of $(\\\\nabla c(w_k)^T\\\\nabla c(w_k))^{-1}$ is $\\\\mathcal{O}(m^2n + m^3)$, cost of $(\\\\nabla c(w_k)^T\\\\nabla c(w_k))^{-1}\\\\nabla c(w_k)^Tg_k$ is $\\\\mathcal{O}(m^2n + m^3 + m^2 + mn)$. Hence the cost of computing $\\\\bar{g}_k$ is $\\\\mathcal{O}(m^2n + m^3)$. For Line 6 when computing $s_k$, as discussed in Section 2.3, we discuss the cost of computing $u$ and $v$ for $s = u + v$. The cost of computing $v$ (Line 225) is $\\\\mathcal{O}(m^2n + m^3)$. The cost of computing $u$ (Line 238) is $\\\\mathcal{O}(m^2n + m^3)$, similarly to computing $\\\\bar{g}_k$. Therefore, the overall of Algorithm 2 is $\\\\mathcal{O}(m^2n + m^3)$. When $m \\\\ll n$, it is $\\\\mathcal{O}(n)$.\\n\\nEmpirically, we can add experiments comparing performance when increasing the number of hard constraints by using more samples in defining the hard constraints. In addition, as in our response to Reviewer CkE3, one way to use more samples to compose hard constraints without increasing the total number of constraints (to account for computational costs) is that one can generate certain clusters of samples, and for each cluster, the average PDE residual can be treated as a hard constraint to be zero. We are happy to incorporate such experiments to make a comparison. 
We will update the manuscript PDF with these details by November 21.\\n\\n**Question**: How is the method different and similar to reference (https://arxiv.org/abs/2402.07251), which deals with linear equality constraint in PINN through projection layers derived from KKT conditions\\n\\n**Response**: We thank the reviewer for this question and for bringing this reference. As in our response to Reviewer CkE3, the method in Chen et al. (https://arxiv.org/abs/2402.07251) is limited to constraints of the form $Bu(x) + Ax = b$, where $x$ is the input, $u(x)$ is the neural network approximated PDE solution, and $(A, B, b)$ are given parameters. In other words, their method is applicable only to linear relationships between PDE inputs and solutions. Consequently, their approach cannot be applied to solve all the problems in our experiments, where the constraints involve nonlinear relationships between the input and the PDE solution. In contrast, our method is capable of handling problems with general nonlinear and nonconvex constraints. Additionally, their use of projection differs from ours: they project the model predictions onto the feasible region defined by the linear equality constraints, whereas we use the projected stochastic gradient only when taking momentum. Otherwise, our approach allows iterates to be infeasible, which is the case for the best-performing state-of-the-art methods for solving constrained optimization problems.\"}", "{\"summary\": \"The paper brings together the stochastic gradient-based SQP framework by Berahas et al. 
and the popular Adam optimization algorithm for ML-based solving of partial/ordinary differential equations with hard constraints.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The topic of the study is interesting.\\nI am happy to revise my evaluation based on the answers to my questions.\", \"weaknesses\": \"One may argue that novelty and originality are limited.\\nAlgorithm 1 and the underlying Theorem 1 are based on previous work (Berahas et al., 2021). \\nExcept for the empirical evaluations, the main novel aspect boils down to plugging in an Adam-like update rule into Algorithm 1. \\n\\nThe basic idea in Algorithm 2 that the \\u201estochastic gradient gk is replaced by its orthogonal projection onto the null space of \\u2207c(w_k)^T\\u201c in a mixture of Adam and Algorithm 1 is clear. However, this key aspect - perhaps almost trivial - should be spelled out because it is so important. That is, line 1 in Algorithm 2 should be derived as used in the algorithm with all intermediate steps.\\n\\nThe manuscript states \\u201eWhen the number of rows of J (i.e., m) is small, the overall cost is proportional to that of computing H^{-1}g with a symmetric and positive definite H, as is required for an Adam-based method for the unconstrained setting.\\u201c But there are no non-diagonal matrices in place in Adam, right? Is this under the assumption that H is diagonal (e.g. the identity matrix I)? Why is the scaling of Adam and P-Adam then the same?\\n\\nIn fact, the scaling w.r.t. 
the number of constraints could be problematic.\\nAlthough wall clock time experiments are always a bit problematic (e.g., they may depend on the solver), I suggest plotting performance over wall clock time depending on the number of hard constraints.\\nThe number of hard constraints in the experiments was always very small; some scaling experiments would be interesting to see.\\n\\n## Experiments\\n\\nThe differences between the methods are not always very pronounced.\\nThere are only small differences in general in the \\u201eChemical Engineering Problem\\u201c and between Adam(inc) and P-Adam in 1D Burgers\\u2019 Equation, and also in general on the \\u201e2D Darcy flow\\u201c task.\\n\\nBaseline experiments using the stochastic gradient-based SQP with steepest descent as in Algorithm 1 are missing.\\nHow much do we actually gain from moving to P-Adam if we tune the SGD learning rate?\\n\\nThe reader wonders how well the \\u201ehard\\u201c constraints are met. For example, what is the value of the mass balance during the course of optimisation for the different methods?\\nIt would be nice to see the ODE-residual errors for the points that were treated as soft targets vs those which were linked to \\u201ehard\\u201c constraints.\", \"1d_spring\": \"The times at which the ODE-residual terms were defined were 30 evenly spaced points over [0,1] (end page 6).\\nThe P-Adam considered the hard constraints on ODE-residuals at the time points {0.14, 0.4, 0.7}.\\nThese were different points than the equally spaced 30, right?\", \"if_yes\": \"Is this fair? Why was no subset selected? How do the results change if for the settings without hard constraints the three points are added to the soft constraints (i.e., 33 instead of 30 points are computed in the ODE-residual error component of the error)?\\n\\n## Comments regarding presentation\\n\\nThe listing of the methods in lines 297-299 is confusing.
Should it be \\u201eAdam(con)\\u201c instead of \\u201eAdam(unc)\\u201c in line 299? \\n\\nRather put Appendix C on runtime in the main body of the paper and move some specification of the (standard) benchmark problems to the appendix.\", \"questions\": [\"How is line 1 in Algorithm 2 exactly derived? Could you spell this out so that it is easy to follow (it is one of the main aspects of the study)?\", \"Why is it correct to state (lines 240-243) that computing H^-1g is proportional to what is required in a standard Adam step?\", \"How much do we actually gain from moving from stochastic gradient-based SQP as in Algorithm 1 to P-Adam if we tune the SGD learning rate?\", \"How is the wall-clock time scaling w.r.t. the number of \\\"hard\\\" constraints?\", \"How well are the \\\"hard\\\" constraints met in practice? Not always exactly, right?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates informed machine learning by proposing a novel approach that integrates hard constraints directly into the optimization process, as opposed to previous methods that formulate informed constraints as soft constraints relying on penalty techniques (e.g., augmented Lagrangian methods). Building upon a recent stochastic gradient descent (SGD) method for constrained optimization, the authors incorporate hard constraints using a Newton-based technique. By employing a projection-based approach, they enable the handling of hard constraints within the well-established Adam optimizer.
The method is empirically demonstrated in several small-scale experiments to exhibit robust and superior performance compared to other methods that do not treat constraints as hard constraints.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Hard constraint handling in Adam** The paper proposes to integrate projected gradient descent with Lagrange multipliers into the stochastic optimization framework, specifically adapting the Adam optimizer to handle hard constraints. By modifying the standard Adam update rule to project gradients onto the null space of the constraint Jacobian, the method ensures current solution feasibility at each iteration.\", \"**Computational Efficiency in small constraint scale**: When the Hessian matrix is diagonally approximated and the number of constraints is small (low-rank constraint Jacobian), the constraint-related equations can be solved efficiently, which does not incur much computational overhead.\"], \"weaknesses\": [\"**Scalability of constraints**: While the authors argue that a small number of constraints often suffices for good performance, the scalability of the method as the number of constraints increases is not extensively discussed. As the number of constraints grows, the projection step may become less efficient, and solving the constraint-related equations can become computationally intensive.
It would be beneficial for the authors to elaborate, either theoretically or empirically, on the algorithm's limitations and performance when dealing with a larger set of constraints; this could be accomplished on more scalable problems, which would also help address the limited empirical scope weakness mentioned below.\", \"**Lack of Convergence proof** Although the paper references the convergence properties of the SGD-SQP method, these theoretical guarantees do not naturally extend to the proposed P-Adam algorithm and are left as future work.\", \"**Limited Empirical Scope** The experimental validation presented in the paper is limited to small-scale problems with relatively simple neural network models. This restricted scope makes it challenging to assess the method's effectiveness and scalability in more complex or large-scale applications. Expanding the empirical evaluation to include larger datasets and more sophisticated models would strengthen the paper's claims regarding the practical utility of the proposed method.\"], \"questions\": \"- How is the method different and similar to reference [1], which deals with linear equality constraint in PINN through projection layers derived from KKT conditions\\n\\n[1] https://arxiv.org/abs/2402.07251\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I have read the rebuttal(s).\\nThe efforts to improve the manuscript are appreciated.\\n\\nOne of my main questions/concerns was about how ODE-residual errors for the points that were treated as soft targets compared to points which were linked to ``hard'' constraints.\\nTo address this, the authors added Appendix F / Figure 14. This is appreciated. However, I would have preferred to see a plot like this for all problems. More importantly, I think the plot does not look very convincing. One might argue that Adam(con) did better than P-Adam(con) for the hard constraints.
Furthermore, the differences in residuals between soft and \\\"hard\\\" constraints are not very pronounced. I think this needs to be studied further.\"}", "{\"comment\": \"Dear reviewer, we have updated the PDF. The changes we mentioned would be made by November 21 are now incorporated. Appendix F discusses the performance for increasing hard constraints by selecting more inputs or using the same number of constraints by aggregating ODE residuals at multiple times.\"}", "{\"comment\": \"Dear reviewer, we have updated the PDF. The changes we mentioned would be made by November 21 are now incorporated. Appendix F discusses the performance for increasing hard constraints by selecting more inputs or using the same number of constraints by aggregating ODE residuals at multiple times.\"}", "{\"comment\": \"**Weakness**:\\nOne may argue that novelty and originality are limited. Algorithm 1 and the underlying Theorem 1 are based on previous work (Berahas et al., 2021).\\nExcept for the empirical evaluations, the main novel aspect boils down to plugging in an Adam-like update rule into Algorithm 1.\\n\\n**Response**: Although Algorithm 1 and its corresponding Theorem 1 are based on previous work (Berahas et al., 2021), it is novel to employ it for solving physics-informed machine learning problems using the hard-constrained problem formulation. Beyond that, indeed, a main contribution of our work is the incorporation of an Adam-type update rule. Our experiments show that doing this in a direct manner, such as in M\\u00e1rquez-Neila et al. (2017), does not lead to consistently better results. However, the use of our projection-based Adam technique performs much better, and often outperforms a soft-constrained approach.\\n\\n**Weakness**:\\nThe basic idea in Algorithm 2 that the ``stochastic gradient $g_k$ is replaced by its orthogonal projection onto the null space of $\\\\nabla c(w_k)^T$'' in a mixture of Adam and Algorithm 1 is clear.
However, this key aspect - perhaps almost trivial - should be spelled out because it is so important. That is, line 1 in Algorithm 2 should be derived as used in the algorithm with all intermediate steps.\\n\\n**Response**: We thank the reviewer for this comment. We are happy to change the presentation of Algorithm 2 and include the entire loop of the algorithm as in Algorithm 1. We will update the manuscript PDF on November 21.\\n\\n**Weakness**:\\nThe manuscript states ``When the number of rows of $J$ (i.e., $m$) is small, the overall cost is proportional to that of computing $H^{-1}g$ with a symmetric and positive definite $H$, as is required for an Adam-based method for the unconstrained setting.'' But there are no non-diagonal matrices in Adam, right? Is this under the assumption that $H$ is diagonal (e.g. the identity matrix I)? Why is the scaling of Adam and P-Adam then the same?\\n\\n**Response**: This appears in Line 242. Yes, it is under the assumption that $H$ is diagonal, as stated in Lines 113-115. We will edit the manuscript to add the word ``diagonal\" in Line 242 and upload on November 21. The details of the computational cost can be found in our response to Reviewer FQAx, i.e., the overall cost of Algorithm 2 is $\\\\mathcal{O}(m^2n + m^3)$. When $m \\\\ll n$, it is $\\\\mathcal{O}(n)$, which is the same as the computation cost of Adam.\\n\\n**Weakness**:\\nIn fact, the scaling w.r.t. the number of constraints could be problematic. Although wall clock time experiments are always a bit problematic (e.g., they may depend on the solver), I suggest plotting performance over wall clock time depending on the number of hard constraints.
The number of hard constraints in the experiments was always very small; some scaling experiments would be interesting to see.\\n\\n**Response**: As in our response to Reviewer FQAx, we will add experiments comparing performance when increasing the number of hard constraints by using more samples in defining hard constraints. In addition, as in our response to Reviewer CkE3, one way to use more samples to compose hard constraints without increasing the total number of constraints (to account for computational costs) is that one can generate certain clusters of samples, and for each cluster, the average PDE residual can be treated as a hard constraint to be zero. We are happy to incorporate such experiments to make a comparison and will update the manuscript PDF with these details by November 21.\"}", "{\"comment\": \"(authors response continued)\\n\\n**Question 1**: Pg 3 suggests that the Lipschitz constants for the objective gradient and constraint Jacobian can be estimated. What effects does this have on the convergence or the exactness of the method?\\n\\n**Response**: For a Lipschitz constant $L \\\\in \\\\mathbb{R}_{>0}$ of an arbitrary function $f$ such that $||f(x) - f(\\\\bar{x})|| \\\\le L||x - \\\\bar{x}||$, any real constant $\\\\bar{L} \\\\ge L$ is also a valid Lipschitz constant for $f$. Hence, theoretically, if larger Lipschitz constants are used for the objective gradient and constraint Jacobian, the step size $\\\\alpha_k$ will be smaller, leading to slower convergence of the iterate sequence generated by Algorithm 1. Conversely, if the estimated Lipschitz constants are smaller than the actual values, the convergence of the iterate sequence is not guaranteed. In the experiments presented in Section 4, we did not estimate the Lipschitz constants. Instead, we followed common practice by tuning the step size. For instance, in the Spring problem, step sizes were selected from $\\\\\\\\{5\\\\times 10^{-4}, 10^{-4}\\\\\\\\}$.
Additional tuning experiments and details are provided in Appendix B. \\n\\n\\n**Question 2**: Can \\u201calmost-surely\\u201d be defined precisely throughout? Does this refer to a probability?\\n\\n**Response**: In Theorem 1, ``almost-surely'' is defined with respect to all realizations of a run of Algorithm 1, meaning that the probability of the iterate sequence $\\\\\\\\{W_k\\\\\\\\}$ remaining within a convex set $\\\\mathcal{W}$ is 1. We will incorporate this clarification into the manuscript PDF and upload the revised version on November 21.\\n\\n**Question 3**: Pg 6 suggests that only a few terms should be treated as \\u201chard constraints.\\u201d Is there a systematic way to determine which terms should be treated as hard constraints?\\n\\n**Response**: There is no universally best method for selecting samples to be treated as hard constraints. Even in soft-constrained approaches, sampling methods are often employed to choose samples that form the penalty term. This is an active topic of research in the physics-informed learning community. These ideas could be used in hard-constrained methods, too. In this manuscript, our method is to randomly (uniformly) select a predetermined number of samples and treat them as hard constraints. Alternatively, if one wishes to use more samples to compose hard constraints without increasing the total number of constraints (to account for computational costs), other strategies can be employed. For example, one can generate certain clusters of samples, and for each cluster, the average PDE residual can be treated as a hard constraint to be zero. (In other words, multiple samples can be combined together to form only a single constraint. This allows complete flexibility in the number of constraints that are added for a given number of samples.)
We are happy to incorporate such experiments to make a comparison and will update the manuscript PDF with these details by November 21.\"}", "{\"comment\": \"I have re-read the revised article, and I appreciate the authors' addition of a description distinguishing their contributions from previous work, as well as the inclusion of more experiments. However, my main concern still remains: why is the application of the projection operator more effective? The statement in Section 2.2 of the paper mainly indicates that this change is feasible, but why is it better? This is indeed an unusual change, so you need more description to demonstrate its effectiveness, especially for basic optimization algorithms, where high performance in experiments alone is not enough. Therefore, I believe the article still needs further improvement. However, given the authors' detailed rebuttal, I will raise the score.\"}", "{\"comment\": \"Thanks for the detailed responses and clarifications. The clarification of mathematical notations (e.g., in Q1-Q2) is helpful and I will increase my score accordingly.\"}", "{\"comment\": \"(authors response continued)\\n\\n**Experiment comment 2**:\\nBaseline experiments using the stochastic gradient-based SQP with steepest descent as in Algorithm 1 are missing. How much do we actually gain from moving to P-Adam if we tune the SGD learning rate?\\n\\n**Response**: We are happy to incorporate experiments to compare Algorithm 1 (i.e., not using momentum stochastic gradients) with P-Adam. We will update the manuscript PDF with these results by November 21.\\n\\n**Experiment comment 3**:\\nThe reader wonders how well the **hard** constraints are met. For example, what is the value of the mass balance during the course of optimisation for the different methods?
It would be nice to see the ODE-residual errors for the points that were treated as soft targets vs those which were linked to ``hard'' constraints.\\n\\n**Response**: We are happy to incorporate results that demonstrate the ODE/PDE-residual errors for points treated as soft constraints and hard constraints. We will update the manuscript PDF with these results by November 21.\\n\\n**Experiment comment 4**:\", \"1d_spring\": \"The times at which the ODE-residual terms were defined were 30 evenly spaced points over [0,1] (end page 6). The P-Adam considered the hard constraints on ODE-residuals at the time points {0.14, 0.4, 0.7}. These were different points than the equally spaced 30, right? If yes: Is this fair? Why was no subset selected? How do the results change if for the settings without hard constraints the three points are added to the soft constraints (i.e., 33 instead of 30 points are computed in the ODE-residual error component of the error)?\\n\\n**Response**: We thank the reviewer for this question. Yes, the points $\\\\\\\\{0.14, 0.4, 0.7\\\\\\\\}$ used for hard constraints might not belong to the set for defining soft constraints in the objective. We will adjust the experiment setting so that the hard constraints are defined over a subset of samples defining the soft constraints. (Generally speaking, it does not need to be a subset, but we are happy to make this change for our comparison here.) We will update the manuscript PDF with these results by November 21.\\n\\n**Comments regarding presentation 1**:\\nThe listing of the methods in lines 297-299 is confusing. Should it be Adam(con) instead of Adam(unc) in line 299?\\n\\n**Response**: Yes. Thanks for catching this typo. We will fix it in the PDF.\\n\\n**Comments regarding presentation 2**:\\nRather put Appendix C on runtime in the main body of the paper and move some specification of the (standard) benchmark problems to the appendix.\\n\\n**Response**: Thanks for the suggestion.
We will move the running time results into the main body and adjust other parts of the presentation accordingly to ensure the page limit is satisfied.\\n\\n**Question 1**:\\nHow is line 1 in Algorithm 2 exactly derived? Could you spell this out so that it is easy to follow (it is one of the main aspects of the study)?\\n\\n**Response**: Line 1 in Algorithm 2 takes the projection of the stochastic gradient $g_k$ onto the null space of $\\\\nabla c(w_k)^T$. Note the matrix $\\\\nabla c(w_k)(\\\\nabla c(w_k)^T\\\\nabla c(w_k))^{-1}\\\\nabla c(w_k)^T$ is the projector onto the range space of $\\\\nabla c(w_k)$ and $I-\\\\nabla c(w_k)(\\\\nabla c(w_k)^T\\\\nabla c(w_k))^{-1}\\\\nabla c(w_k)^T$ is a projector onto the null space of $\\\\nabla c(w_k)^T$. Also, $$I-\\\\nabla c(w_k)(\\\\nabla c(w_k)^T\\\\nabla c(w_k))^{-1}\\\\nabla c(w_k)^T = Z_k(Z_k^TZ_k)^{-1}Z_k^T$$ where the columns of $Z_k$ span the null space of $\\\\nabla c(w_k)^T$. We use the projection matrix $I-\\\\nabla c(w_k)(\\\\nabla c(w_k)^T\\\\nabla c(w_k))^{-1}\\\\nabla c(w_k)^T$ since it is computable through the algorithm. Our intuition for taking this projection is presented in Lines 181-199.\\n\\n**Question 2-5** are addressed through the responses above.\"}", "{\"comment\": [\"Thank you to the authors for the detailed response regarding the theoretical complexity analysis and the empirical comparison of constraint numbers in Appendix E, Figure 13. I have two additional concerns:\", \"Could the authors explain the possible cause of the initial loss bumps observed in the 9-constraint cases?\", \"Upon further review of Figure 14, I share a similar question with reviewer nK6C: could the authors clarify why the error performance at the hard constraint times is similar to that at other times? Additionally, it appears that the error at constraint times for Adam(con) is quite competitive with that of P-Adam(con).\"]}", "{\"summary\": \"This paper introduces an approach that is characterized by three key aspects.
Initially, it incorporates prior information into the training process via hard constraints instead of the more common contemporary technique of soft constraints. Furthermore, the approach abstains from using penalty-based methods. Instead, it relies on a recently introduced stochastic-gradient-based algorithm that is computationally efficient and employs a Newton-based method for constraint management. Lastly, a projection-based adaptation of the widely recognized Adam optimization algorithm is suggested for scenarios involving hard constraints. The numerical experiments achieve superior final prediction accuracy when contrasted with a soft-constraint method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This article proposes a potential new method and demonstrates good results in numerical experiments.\", \"weaknesses\": [\"The narrative in the first section of this paper is somewhat disorganized; it does not clearly articulate the motivation behind the paper, namely, why it is essential to employ key techniques such as hard constraints, and why penalty-related algorithms are not utilized. Since the number of hyperparameters for soft constraints is not significantly large, the core difference between soft and hard constraints is not clearly identified.\", \"The serious issue with this article is that it exaggerates its contributions. The contributions stated in section 1.1 (a), (b), and the entire content of section 2.1 are in fact all derived from [1]; the authors have not made any form of innovation. The only innovative part of the entire article is section 2.2.\", \"The innovation in section 2.2 is also quite confusing. Algorithm 2 is very similar to the algorithm in [2], with the only difference being the application of a projection operator to the gradient. However, the article's explanation of why the projection is used is confusing.
The conclusion in the article is that when H is chosen as the identity matrix I, the gradient and the projected gradient are the same, but the problem is that this clearly does not hold for the Adam algorithm. The article does not provide any other explanation for this distinction, nor does it have convergence theory to support it. The experimental results alone are not convincing.\", \"[1] Berahas A S, Curtis F E, Robinson D, et al. Sequential quadratic optimization for nonlinear equality constrained stochastic optimization[J]. SIAM Journal on Optimization, 2021, 31(2): 1352-1379.\", \"[2] M\\u00e1rquez-Neila P, Salzmann M, Fua P. Imposing hard constraints on deep networks: Promises and limitations[J]. arXiv preprint arXiv:1706.02025, 2017.\"], \"questions\": \"*\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their further comments.\\nFigure 14 plots the box plot of $c(w_T)$ of five random runs where $w_T$ represents the terminated solutions. To facilitate an easier comparison of residuals across the three algorithms, we added Figure 15 to the PDF. Figure 15 shows the average $|c(w_T)|$ of five random runs for all three algorithms in a grouped bar chart. Over thirty discrete times, P-Adam(con) achieves the smallest residual at 16 times, Adam(con) 13 times, and Adam(unc) only once. \\n\\n- We can prepare similar plots to Figure 14 for the other three problems in the coming days and include them in the camera-ready version. Over our experiments, P-Adam consistently gives the best average ODE/PDE-residuals over all inputs that define soft constraints (i.e., a superset of the points defining the hard constraints) among the three methods.
Therefore, P-Adam(con) would demonstrate the smallest residuals at most inputs, as observed in the Spring problem.\\n\\n- Regarding the concern about comparing the performance of Adam(con) and P-Adam(con) on hard-constraint residuals, we emphasize that both algorithms aim to find a KKT solution of the constrained problem (1) in our manuscript, i.e., a point $w$ associated with some $\\lambda$ such that\\n\\\\begin{align*}\\n\\\\nabla f(w) + \\\\nabla c(w) \\\\lambda = 0 \\\\quad \\\\text{ and }\\\\quad c(w) = 0.\\n\\\\end{align*}\\nTherefore, a solution $w$ with better constraint residual $||c(w)||$ does not necessarily indicate a better solution overall. For the PIML problem, Figures 1, 3, and 5 display the total training loss, which combines some data-fitting loss and some residual of the differential equations. We believe this metric is a more meaningful measure of algorithm performance in solving PIML problems. Additionally, Figures 2, 4, 6, and 7 demonstrate that P-Adam(con) produces accurate predictions, suggesting that the predictions adhere to the physical rules governing the system.\\n\\n- For the concern that differences in residuals between soft and hard constraints are not very pronounced, we respectfully disagree. We believe the differences are indeed significant. We highlight that ODE/PDE constraints exhibit a strong dependency on neighboring inputs. For instance, in the Spring problem, the first hard constraint is:\\n\\\\begin{equation*}\\n m \\\\frac{d^2 u(t)}{d t^2} + \\\\mu \\\\frac{d u(t)}{d t} + k u(t) = 0 \\\\quad \\\\text{at}\\\\quad t = \\\\frac{4}{29}.\\n\\\\end{equation*}\\nIf this quantity is close to zero, we expect the constraint value to also be close to zero at, e.g., $t=\\\\frac{3}{29}, \\\\frac{5}{29}$, etc.
This behavior is evident in Figure 14, as we discussed in Line 863: \\\"The ODE residuals are significantly reduced at and near the times treated as hard constraints, i.e., $\\{\\tfrac{4}{29}, \\tfrac{12}{29}, \\tfrac{21}{29}\\}$, when comparing the soft-constrained method (Adam(unc)) to the hard-constrained methods.\\\"\\n\\n We would also point out to the reviewer that the prediction function family that is chosen has an effect on the ultimate residual that can be obtained. For example, for a given neural network architecture, there are limits to how accurately the PDE can be satisfied at all inputs. Our work does not focus on neural network design for solving physics-informed problems; that is the subject of research by others in the field. Rather, our work aims to show that---whatever the architecture/model that is chosen---our strategy can offer better algorithmic behavior and final solution quality compared to alternative methods.\"}", "{\"metareview\": \"The paper proposes a methodology for (physics)-informed machine learning based on hard constraints and optimized with an SGD variant. Nearly all reviewers recommended rejection and agreed on the main problems of the paper: not clear what the novelty is given that lots of ML formulations do use hard constraints with an array of optimization algorithms, and much of the paper comes from previous work; limited experiments (lacking important comparison points, small-scale problems).\", \"additional_comments_on_reviewer_discussion\": \"N/A\"} ] }
Bx5kcMkb8l
No Factor Left Behind: Towards arbitrary amount of factors in the medical cohort analysis
[ "Xuehai Wang", "Xiangdong Wang", "Hongyi Luo", "Lei Zhang", "Ping Zu", "Peng Zhu" ]
Driven by the goal of data-driven analysis on the large-scale cohort, a large language model (LLM) has solidified itself as a critical focus of artificial intelligence medical research today. However, such efforts have coalesced around a small group of evidence, leaving behind the vast majority of factors collected in the cohort investigation. What does it take to break the more than 70 factors while ensuring responsible, high-quality prediction, all while keeping medical considerations in mind? In No Factor Left Behind, we first took on this challenge by numerical interpretable evidence contextualizing the need for Premature rupture of membranes (PROM) risk assessment through exploratory interviews with domain experts. Then, we created datasets and models aimed at narrowing the performance gap between low and high-frequency factors. More specifically, we developed a model based on factor-value pairs trained on data obtained with robust and effective data mining techniques tailored for low-frequency factors. We propose multiple architectural and training improvements to counteract overfitting while training on 70 factors. Critically, we interpreted the risk of PROM over 7000 cohort participants' directions using numerical interpretable evidence with precise values of factors combined with human evaluation covering all factors in the dataset to assess medical safety. Our model achieves a performance of 79\% accuracy (78 factors) and 96\% accuracy (40 factors) with risk assessment at the screening level, laying the novel insight for realizing a general medical cohort analysis method in the era of LLMs.
[ "Medical cohort analysis", "Risk assessment", "generalization", "prompt engineering", "open source model" ]
https://openreview.net/pdf?id=Bx5kcMkb8l
https://openreview.net/forum?id=Bx5kcMkb8l
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y2rv65du9B", "rylGjv3SSp", "q3ljd4NjsY", "cUAwldmW1U", "T0rhT4nKH0", "OM6Kbrwnq8", "CzDCsYLYPi", "Ahro2gXFpN", "9PF7Yxcz8C", "5B0MXDOcG5" ], "note_type": [ "official_comment", "comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732550589881, 1732687021796, 1731004384343, 1732542858324, 1730476998477, 1732549422717, 1732546171428, 1732552264910, 1732613512938, 1729585450277 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6277/Authors" ], [ "ICLR.cc/2025/Conference/Submission6277/Authors" ], [ "ICLR.cc/2025/Conference/Submission6277/Reviewer_YLQt" ], [ "ICLR.cc/2025/Conference/Submission6277/Authors" ], [ "ICLR.cc/2025/Conference/Submission6277/Reviewer_rtzB" ], [ "ICLR.cc/2025/Conference/Submission6277/Authors" ], [ "ICLR.cc/2025/Conference/Submission6277/Authors" ], [ "ICLR.cc/2025/Conference/Submission6277/Authors" ], [ "ICLR.cc/2025/Conference/Submission6277/Reviewer_rtzB" ], [ "ICLR.cc/2025/Conference/Submission6277/Reviewer_WVYT" ] ], "structured_content_str": [ "{\"title\": \"Reply to reviewer concerns\", \"comment\": \"Thanks for your comment\\n\\nWe are sorry that latex's green hand usage made you feel challenged to interpret our study. \\nWe respect your efforts on the question about our work.\", \"w1\": \"Numerous formatting and typos can be seen in the document.\", \"a1\": \"Thanks for your comment. We are sorry about misrepresentation of equations. This process is implemented in Python to let OpenBioLLama (OBL) annate one feature and its value while the Eq5 is the loop that OBL annate all of the features that a participant owns.\", \"q1\": \"How function v in Eq 4 (or 5) is implemented in practice? That seems to have a key role in the process.\", \"q2\": \"How is the expert-augmented part implemented? Are the experts sourced from the human participants? 
What would this mean for the general application of this tool?\", \"a2\": \"Thanks for your comment. The expert-augmented part is implemented by training OBL on the up-to-date tokens from the domain-specific papers. Yes, the experts are sourced from human participants; since the papers are written by human experts and peer-reviewed, those papers also become the reference for human experts in clinical decision-making.\\n\\nWe thank you again for your patience with our formatting and typos.\\n\\nLastly, we would be grateful if you would consider the answers above when evaluating our research. Your questions will be addressed in the next version of our article.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"After the rebuttal, all authors agree to withdraw the current version of NFLB.\\n\\nWe are grateful for the reviewers' efforts to help us improve our study and representation. However, due to our lack of experience at the conference, we apologize for the late reply. We especially thank Reviewer WVYT for providing most of the constructive suggestions.\", \"we_will_refine_nflb_in_the_following_steps\": \"1. Running experiments on MIMIC, UKB, GBD, and NHANES with NFLB\\n2. Optimizing the language representation and overall flow of our manuscript\\n3. Using modern benchmarks to evaluate NFLB comprehensively.\\n\\nThe next version of NFLB will be updated soon.\\n\\nAgain, many thanks to Reviewer WVYT.\"}", "{\"summary\": \"This study aims to enhance predictive modeling for medical risk assessment using large language models (LLMs) by addressing the challenge of integrating a broad range of low- and high-frequency factors.
Through expert interviews and specialized data mining, the team developed a model capable of accurate PROM risk assessment across 70 factors, achieving 79% accuracy on 78 factors and 96% accuracy on 40 factors.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The study's overall plan (while fairly unclear) seems fairly reasonable and timely.\", \"The study aims to use an LLM-based pipeline for a novel application (i.e., identifying the factors to include in medical cohorts), in a customized and expanded way.\", \"The study adopts a few solid theoretical frameworks, such as the one from epidemiology.\", \"The study includes a user (human) study, although its design and findings are a bit unclear.\"], \"weaknesses\": [\"Despite the excitement, the primary concern about this submission is about its presentation and soundness. In the submitted format, the overall rationale and design seem quite convoluted and unclear. The whole paper's flow is weak. Numerous formatting and typos can be seen in the document. For instance, the whole paragraph related to Equation 4 seems to be duplicated. Or the text refers to the left and right side of Fig 1, but it seems that the authors were referring to the top and bottom parts. It is very hard to understand Section 3.2 (a key section for the Method) and its connection to 3.3 is unclear. Unfortunately, this issue makes evaluation of the core concepts and contributions very challenging.\"], \"questions\": [\"How is function v in Eq 4 (or 5) implemented in practice? That seems to have a key role in the process.\", \"How is the expert-augmented part implemented? Are they sourced from the human participants? 
What would this mean for a general application of this tool?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to reviewer concerns\", \"comment\": \"Thanks for your comment.\\n\\nYour questions and concerns help us to improve the whole study and optimize its presentation.\", \"q1\": \"Our study doesn't give a context on how these results compare to benchmarks or their clinical significance\", \"a1\": \"Yes, this question is critical to our method, and we wanted to test our method on other benchmarks and datasets, such as the Medical Information Mart for Intensive Care [MIMIC (I-III)], UKBiobank, the Global Burden of Disease (GBD), and the National Health and Nutrition Examination Survey (NHANES). However, we had great difficulty finding a dataset at the same scale as the cohort we collected, which has more than 70 factors for one consequence. In UKB, the consequence with the most recorded factors is Body Mass Index (BMI), with approximately 21 factors. As for MIMIC, the most factor-rich measurement is the electroencephalogram (EEG), with approximately 35 factors. In GBD, a classical and well-trusted database for epidemiology, the consequence with the most factors is the total cancer rate, with approximately 30 factors. Last, NHANES, a cross-sectional study counted in 2-3-year cycles (15 cycles, 30 years), has the most factors for the lung function test, with 45 factors. We are sorry for not testing on those datasets, since we have limited computing resources and their factor counts are far from those we collected.\", \"q2\": \"Thanks for your comment. This process is based on game theory; it annotates pure numerical data to help the LLM understand whether a feature and its value have a positive or negative effect on the consequence, case by case. 
We are sorry not to have discussed scalability and complexity, since we could not find larger factor counts before submitting this manuscript. As for overfitting and managing complexity, we are sorry that we did not present the training process of this tree-based generator. We will add this to the appendix.\", \"q3\": \"-The automatic prompt generation method's scalability is mentioned, but computational costs and feasibility for clinical use are missing. There's no mention of time or resources required, making it hard to assess practical viability.\", \"a3\": \"We are very grateful that the reviewer is concerned about the insufficient computational resources in the current BioMed area. We cannot speak for other medical areas, since we focus on maternal and infant health (MIH). To our knowledge, fine-tuning a BERT-like model adapted to MIH has worked sufficiently well. Due to the privacy restrictions in the cohort regulations, it is difficult to transfer any non-public cohort data to data centers or cloud platforms to train a Flan-T5-scale model or apply PEFT to an open-source model. We fine-tuned PubMedBERT in 11 days with an RTX 4090. We will open-source both the fine-tuning code and the MIH-related paper dataset shortly.\", \"q4\": \"The results rely on outdated models like LLama3.1 and MedAlpaca without clear justification. The reported accuracy metrics lack context and comparison to modern benchmarks, making it difficult to interpret their significance.\", \"a4\": \"Thanks for your comments. We are sorry to have selected the outdated models, since we can only use local computational resources. We have also noticed that Anthropic released a statistical approach to model evaluations. We are working with this approach to provide modern benchmarks for the AI4Cohort area. We apologize again for making the significance of our results difficult to interpret.\", \"q5\": \"There are no error bars or confidence intervals, making the reliability of the results hard to assess. 
These are essential to determine how robust the findings are in medical contexts where consistency is critical.\", \"a5\": \"Thanks for your comment. As answer 4 mentioned, we are working on the statistical approach to model evaluations provided by Anthropic. Before we noticed this approach, our evaluation of those models was, regrettably, too simplistic, though we had searched the related research area.\", \"q6\": \"The ethical concerns around using cohort-specific medical data are barely discussed despite the sensitivity of such data. More focus is needed on privacy risks and potential biases in using large language models on this data.\", \"a6\": \"Thanks for your comment. Cohort data sensitivity and participant privacy are also among the issues of concern. All cohort data is restricted to local storage, and the training data consists of de-identified unique IDs with feature values. The whole training process does not involve any personal information that could identify a person. We are also sorry that we cannot present the ethics approval, since ICLR does not allow presenting any information that could identify the authors.\"}", "{\"summary\": \"The text does not flow well and I feel like sometimes sentences do not logically follow each other. The text is poorly formatted and a lot of details are missing. I have to say that the language is formatted in such a way that I am probably missing the point of the paper. It feels like large parts have been written/corrected by an LLM. This is the case in the entire text.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Used existing biomedical models in experiments.\", \"Collected a new dataset to use in the experiments (details are missing about this dataset, though).\"], \"weaknesses\": [\"line 275: why is this separate line here?\", \"402: spelling error: superviesd\", \"Figure 2: the text overlaps with the bars. 
Also predication is meant to be prediction?\", \"line 486: is supposed to be a section heading?\", \"Line 522: multiple links/garbled (?) text\", \"No contextualization/related work of PROM which is only fully mentioned in the abstract. This should be part of the related work section and then again in the discussion section.\", \"Line 151: why cite the SHAP paper?\", \"The PROM use case is not explained well, and it is not clear how this would contribute to the wider ICLR community. There needs to be a generalization of the knowledge.\"], \"questions\": [\"I don't understand the \\\"husband education level\\\" example for a biomedical LLM. Why would this be relevant to the task(s) that you are trying to solve? Again, the lack of related work and the explanation of the application could help here.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to reviewer concerns Part3\", \"comment\": \"Q14: The baseline comparison is only with LLMs, but a comparison to simpler machine learning models would be useful. Traditional models are often preferred in healthcare for their interpretability, and their inclusion would provide a clearer picture of the method's advantages.\", \"a14\": \"Thanks for your comment. We compared LLM performance with Random Forest, XGBoost, LightGBM, and logistic regression. We also chose a support vector machine, but it has not finished training yet, as the deadline is approaching.\\nThe dataset is split into an 80% training set and a 20% test set; before the split, it was balanced by the SMOTE method.\\nXGBoost was the most accurate, surpassing the LLM approach with 97.9% accuracy on 40 factors in the test dataset. 
\\nRandom Forest has the best accuracy among the four models on 78 factors, with 73.2% accuracy in the test dataset.\\nThe most traditional method, logistic regression, is 52% accurate on 40 factors and 51.4% accurate on 78 factors.\\nWe are grateful to the reviewer for reminding us to add the comparison to other machine-learning models.\\nWe will add those results in detail to the appendix in the next version of our paper.\", \"q15\": \"The dataset is from a specific cohort, but the findings' generalizability to other datasets is not discussed. Testing on diverse datasets would strengthen the paper's claims.\", \"a15\": \"Thanks for your comments. We are willing to test this method on the datasets we mentioned in answer 1. However, our initial aim is to enlarge the factor counts in cohort analysis beyond what was previously feasible by using LLM inference. We are sorry we did not have sufficient time to collect the related papers as training tokens to update the LLMs' knowledge for UKBiobank, NHANES, GBD, etc., as we did for MIH. We will continue to accumulate more up-to-date medical knowledge in different areas to test generalizability. Lastly, we would be grateful if you would consider the answer above to evaluate our research. Your questions will be addressed in the next version of our article.\"}", "{\"title\": \"Reply to reviewer concerns Part2\", \"comment\": \"Q7: Choosing 78 factors is not clearly mentioned. There's no explanation for why these factors were selected or how they were deemed necessary, making it difficult to assess this approach's significance.\", \"a7\": \"Thanks for your comment. As we answered in Q1, in the traditional risk assessment approach researchers focus only on the factors mentioned in the golden sample questionnaires. Therefore, other factors are ignored, and the risk assessment frequently fails in PROM risk evaluation. 
Based on this motivation, we are trying to collect as many factors as possible to build a larger landscape for PROM, using the LLM to discover risk factors case by case.\", \"q8\": \"The manual prompt engineering process is impractical for large-scale settings or dynamic settings. Its lack of generalizability across conditions or datasets is a significant limitation, and the adaptability of these prompts in new domains isn't discussed.\", \"a8\": \"Thanks for your comment. We are sorry that we did not mention this: we only wrote 100 prompts manually. We are grateful that the reviewer was concerned about the limitations. We will try to apply this approach in oncology using mRNA data to let LLMs predict drug response.\", \"q9\": \"The automatic prompt generation pipeline lacks discussion on potential errors or biases, especially in medical applications where accuracy is crucial. The risks of generating irrelevant content aren't addressed.\", \"a9\": \"Thanks for your comment. We deployed a guard LLM (Llama Guard-7B) during the automatic generation, and since this is a retrospective cohort experiment, we did not apply human expert checking case by case. We are applying human expert checking in the subsequent real-world testing, but it is not included in the article. Thanks for your reminder. We will address the hallucinations and misinformation in the next version of our article.\", \"q10\": \"The interaction map may not scale well as more factors are added. As complexity increases, overfitting becomes a risk, but the paper doesn't discuss how to manage this or mitigate it such as regularization techniques.\", \"a10\": \"Thanks for your comment. As we mentioned in answer 1, there may not be larger factor counts than ours in the current PubMed database (we have double-checked this). 
As complexity increases, we will consider using a graph to represent each participant, as this method has worked in recommender systems, and we will try to collect more than 78 factors to test this hypothesis.\", \"q11\": \"This subject cannot be ignored. -Bias and fairness issues in the data aren't addressed. Without steps to ensure fairness, the model may propagate existing biases in the training data, leading to unequal outcomes for different patient groups.\", \"a11\": \"Thanks for your comment. Bias and fairness are among the issues we have been most concerned about, having researched MIH for decades. We have been considering for a long time how to demonstrate fairness and impartiality without going against the double-blind review procedure, under which we cannot display either the detailed ethics approval or detailed generations for de-identified participants to prove that the model's fairness has been ensured. We are thankful for your concern, which will be addressed when we submit the detailed ethics approval with the next version of our article.\", \"q12\": \"The cohort-specific prompts could be overfitted to the dataset and fail to generalize to new cohorts. There's no discussion on how adaptable these prompts are across different settings.\", \"a12\": \"Thanks for your comment. Since our training process aims to update the current MIH knowledge, we do not train on the detailed cohort feature values. There is only a slight chance of overfitting on the related papers. All LLM inference settings are the defaults in the PyTorch framework. We gratefully thank the reviewer for pointing out that our presentation may mislead readers into thinking we train the LLMs on cohort data.\", \"q13\": \"The computational cost of fine-tuning large models isn't considered. 
We are very grateful that the reviewer is concerned about the insufficient computational resources in the current BioMed area. We used dual RTX 4090s to PEFT OpenBioLLama over 15 days.\\nWe fully understand the insufficient computational resources in clinical environments, but we believe two RTX 4090s are practical for most hospitals.\\nWe are sorry about the missing details. We will add them to the appendix in the next version of our article.\"}", "{\"title\": \"Reply to reviewer concerns\", \"comment\": \"Thanks for your comment.\\n\\nWe are sorry that our inexperience with LaTeX challenged your interpretation of our study. We respect your efforts to review our work.\", \"q1\": \"Line 151: why cite the SHAP paper?\", \"a1\": \"Thanks for your comment. The factor-based interaction map is developed based on part of SHAP's interpretability function, which also uses game theory to understand the model and data. Therefore, out of respect for that work, we naturally cite the SHAP paper.\", \"q2\": \"I don't understand the \\\"husband education level\\\" example for a biomedical LLM. Why would this be relevant to the task(s) that you are trying to solve? Again, the lack of related work and the explanation of the application could help here.\", \"a2\": \"Thanks for your comment. Husband's education level is a traditional risk factor for pregnant women. We want to utilize this feature annotation process to illustrate how generation based on the medical language model works.\\nWe are sorry about the lack of further explanation.\\nTraditional PROM risk assessments rely solely on predefined questionnaire factors, overlooking potential contributors and leading to frequent inaccuracies. 
To address this limitation, we are utilizing a Large Language Model (LLM) to comprehensively identify a broader range of risk factors on a case-by-case basis, creating a more complete understanding of PROM risk.\\nWe appreciate the reviewers' insightful comments and valuable suggestions regarding the validation of our method on benchmark datasets. We acknowledge the concern regarding using our dataset rather than widely adopted benchmarks like MIMIC, UKBiobank, GBD, or NHANES.\\nThe primary challenge in utilizing these public datasets stems from the significantly higher dimensionality of non-public cohorts like the one we collected. Our data features over 70 factors contributing to a single consequence. In contrast, the publicly available datasets we examined exhibit considerably lower factor counts for their respective consequences: UKBiobank (maximum of 21 factors for BMI), MIMIC (maximum of 35 factors for EEG), GBD (maximum of 30 factors for total cancer rate), and NHANES (maximum of 45 factors for lung function tests). This substantial difference in dimensionality poses a significant obstacle to directly applying our method to these benchmark datasets. Our approach is specifically designed to handle the complexity inherent in high-dimensional data, and reducing the factor count to match the benchmark datasets would potentially compromise the integrity and interpretability of risk assessment.\\nFurthermore, our limited computing resources currently prevent us from processing the massive scale of datasets like MIMIC or UKBiobank in their entirety. This constraint restricts our ability to test our method on these resources now. We acknowledge that these limitations may impact the generalizability of our findings. 
However, we believe that the unique characteristics of our dataset, specifically the high factor count, present a practical method to explore the relationships between a large number of variables and the outcome of interest in other non-public cohorts of the same scale.\", \"q3\": \"It feels like large parts have been written/corrected by an LLM\", \"a3\": \"Thanks for your comment. We are sorry that we omitted a declaration of Grammarly usage for polishing purposes. This declaration will be added in the next version of the article.\\n\\nWe respectfully submit the above response for your evaluation of our research. \\nA revised version of the article will incorporate your questions.\\n\\nAgain, we apologize for our inexperienced LaTeX usage.\"}
The results show that the proposed method outperforms traditional supervised baselines, which provides a better tool for medical cohort analysis while addressing challenges in the LLM-based models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Originality: This paper introduces an approach by focusing on low-frequency factors in medical cohort analysis, which are often ignored. Using manual and automatic prompt generation with LLMs like LLama3.1 and MedAlpaca is relatively creative, as is the factor-based interaction map.\", \"quality\": \"The paper provides partial technical depth, with some explanations of the prompt generation techniques and factor-based interaction map.\", \"clarity\": \"The paper is primarily clear, particularly in describing the prompt generation and the interaction map.\", \"significance\": \"By incorporating low-frequency factors, the paper has the potential to significantly improve risk prediction in healthcare, such as in the prediction of premature rupture of membranes.\", \"overall_comment_on_strengths\": \"This paper makes a good contribution, especially in applying LLMs to low-frequency factors in healthcare.\", \"weaknesses\": \"The paper doesn\\u2019t compare the mentioned method with traditional machine learning models, which would highlight the specific advantages of the proposed approach. The computational cost and feasibility of automatic prompt generation aren\\u2019t discussed, leaving doubts about its practicality in clinical settings. The experiments are based on only one dataset, restricting the generalizability of the findings. Also, overfitting is a risk with the factor-based interaction map, but this paper lacks a discussion on a few mitigation strategies, such as regularization techniques. Some terms throughout the paper are not clear and need further clarification for non-expert readers. 
The explanations for evaluation metrics are too narrow and centered around accuracy without reporting other critical metrics such as AUC. Some ethical concerns and potential biases in the data are mentioned but not completely addressed and discussed.\", \"questions\": \"-The abstract section mentions 79% accuracy with 78 factors and 96% with 40 factors but doesn't give a context on how these results compare to benchmarks or their clinical significance. The term \\\"numerical interpretable evidence\\\" is unclear.\\n-In the introduction section, the paper states that traditional studies overlook most of the factors but doesn't elaborate on why adding so many factors is necessary or even how this can lead to better outcomes. \\n-The factor-based interaction map does not explain why it was chosen and how it improves performance. There is also no discussion on scalability, overfitting, or managing complexity as more factors are added.\\n-The automatic prompt generation method's scalability is mentioned, but computational costs and feasibility for clinical use are missing. There's no mention of time or resources required, making it hard to assess practical viability.\\n-The results rely on outdated models like LLama3.1 and MedAlpaca without clear justification. The reported accuracy metrics lack context and comparison to modern benchmarks, making it difficult to interpret their significance.\\n-There are no error bars or confidence intervals, making the reliability of the results hard to assess. These are essential to determine how robust the findings are in medical contexts where consistency is critical.\\n-The ethical concerns around using cohort-specific medical data are barely discussed despite the sensitivity of such data. 
More focus is needed on privacy risks and potential biases in using large language models on this data.\\n-The conclusion overstates the method's applicability, claiming it provides a universal solution despite showing only incremental improvements. The method's broader relevance to other conditions or datasets isn't convincingly demonstrated.\\n-Choosing 78 factors is not clearly mentioned. There's no explanation for why these factors were selected or how they were deemed necessary, making it so difficult to assess the significance of this approach.\\n-The manual prompt engineering process is impractical for large-scale settings or dynamic settings. Its lack of generalizability across conditions or datasets is a significant limitation, and the adaptability of these prompts in new domains isn't discussed.\\n-The automatic prompt generation pipeline lacks discussion on potential errors or biases, especially in medical applications where accuracy is crucial. The risks of generating irrelevant content aren't addressed.\\n-The interaction map may not scale well as more factors are added. As complexity increases, overfitting becomes a risk, but the paper doesn't discuss how to manage this or mitigate it such as regularization techniques. This subject cannot be ignored.\\n-Bias and fairness issues in the data aren't addressed. Without steps to ensure fairness, the model may propagate existing biases in the training data, leading to unequal outcomes for different patient groups.\\n-The cohort-specific prompts could be overfitted to the dataset and fail to generalize to new cohorts. There's no discussion on how adaptable these prompts are across different settings.\\n-The computational cost of fine-tuning large models isn't considered. 
Training these models is resource-heavy, and the paper doesn't quantify the time or hardware needed, limiting its practicality in clinical environments.\\n-The baseline comparison is only with LLMs, but a comparison to simpler machine learning models would be useful. Traditional models are often preferred in healthcare for their interpretability, and their inclusion would provide a clearer picture of the method's advantages.\\n-The dataset is from a specific cohort, but the findings' generalizability to other datasets is not discussed. Testing on diverse datasets would strengthen the paper's claims.\", \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns', 'Yes, Privacy, security and safety']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
BwlEfAhUVX
SEED-X: Multimodal Models in Real World
[ "Yuying Ge", "Sijie Zhao", "Jinguo Zhu", "Yixiao Ge", "Kun Yi", "Lin Song", "Chen Li", "Ying Shan" ]
The rapid evolution of multimodal foundation models has showcased remarkable capabilities in vision-language understanding and generation, yielding impressive results on academic benchmarks. However, there remains a gap in their progress toward real-world applicability, primarily due to the models' limited capacity to effectively respond to various user instructions and interact with diverse visual data. This limitation can be attributed to the fundamental challenge of modeling multi-granularity visual semantics for comprehension and generation tasks. In this paper, we take a pioneering step towards applying multimodal foundation models in an open-world context and present a unified and versatile foundation model, namely, $\textbf{SEED-X}$. As the first of its kind, SEED-X seamlessly integrates two essential features: (1) comprehending images of arbitrary sizes and ratios, and (2) enabling multi-granularity image generation. Besides the competitive results on public benchmarks, SEED-X demonstrates its effectiveness in handling real-world applications across various domains. We hope that our work will inspire future research into what can be achieved by versatile multimodal foundation models in real-world applications. All models, training, and inference codes are available at https://anonymous.4open.science/r/SEED-X/.
[ "Multimodal LLM", "Comprehension and Generation" ]
Reject
https://openreview.net/pdf?id=BwlEfAhUVX
https://openreview.net/forum?id=BwlEfAhUVX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kh4a45EuOD", "k3Z8WXg42m", "du4cwsCzgd", "djBgOQafZd", "JzYKCew5Dr", "5qH0BjuGdt", "5VLg3AKSPy" ], "note_type": [ "official_review", "official_review", "official_review", "meta_review", "official_review", "official_review", "decision" ], "note_created": [ 1731152508242, 1731065014168, 1730376437773, 1734584462701, 1730263414567, 1730787065238, 1737523535157 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2836/Reviewer_Agua" ], [ "ICLR.cc/2025/Conference/Submission2836/Reviewer_WNpz" ], [ "ICLR.cc/2025/Conference/Submission2836/Reviewer_f1AY" ], [ "ICLR.cc/2025/Conference/Submission2836/Area_Chair_LV7H" ], [ "ICLR.cc/2025/Conference/Submission2836/Reviewer_CjSH" ], [ "ICLR.cc/2025/Conference/Submission2836/Reviewer_69AZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents SEED-X, a VLM that is designed for both multimodal understanding and AIGC. The key idea is to leverage a pretrained query-based visual decoder and inject it into the VLM for image generation and understanding training at the same time. The experiments show that SEED-X achieves promising results on both image generation and understanding.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The problem of pursuing joint image understanding and generation with LLMs seems new and relevant, which would be of value to both the community and the industry.\", \"The method is pretty simple, but it might not be easy to follow as a visual decoder has to be pretrained.\", \"The results look promising.\"], \"weaknesses\": [\"**Novelty**. The proposed method looks like a combination of Emu which regresses continuous visual tokens and DreamLLM which uses continuous queries as image generation conditions. However, I am not saying the efforts of such exploration should not be encouraged.\", \"**Writing**. In the method part, it is quite unclear about every design's motivation. 
There is not a single reference in this part, which may lead to confusion on method detail and differences from previous works. The authors should carefully discuss the technical motivation and the literature comparison.\", \"**Evaluation**. Current experiments only consist of MMBench and MME. More classical benchmarks like typical VQA including VQAv2 and advanced benchmarks like MM-Vet should be used.\"], \"questions\": \"Can SEED-X be used for video understanding? For example, compare with VILA-U on MSVD-QA and TGIF-QA.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents SEED-X, an advanced multimodal foundation model designed for enhanced real-world applicability in both comprehension and generation across diverse user inputs. SEED-X builds upon prior work, SEED-LLaMA, and addresses two main challenges: (1) understanding images of varying sizes and aspect ratios and (2) facilitating multi-granularity image generation. These features enable SEED-X to handle high-level creative generation tasks and precise image manipulation. The model incorporates a visual tokenizer and a novel de-tokenization approach, enhancing image fidelity and allowing for detailed editing based on conditional inputs. SEED-X also supports dynamic resolution image encoding, enabling seamless processing of images with arbitrary dimensions without compromising visual details. SEED-X was pre-trained on extensive multimodal data and underwent instruction tuning across various domains, resulting in specialized versions such as SEED-X-Edit, SEED-X-PPT, and SEED-X-Story, each tailored for specific applications. 
In evaluations, SEED-X achieved competitive results in both comprehension and generation benchmarks, demonstrating its robustness across multimodal large language model (MLLM) benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper tries to solve a very interesting and fundamental problem, a vision-language multimodal foundation model that enables understanding and generation, and shows a range of very interesting application scenarios after instruction tuning, such as image editing, comprehension, and generation.\\n\\n2. While not entirely optimal, the proposed designs, including the visual tokenizer and the dynamic-resolution image encoding, are both reasonable.\\n\\n3. For the visual representation, currently SEED-X leverages the continuous tokens predicted from the learnable queries and optimized with a regression loss. How about using discrete tokens and next-word prediction objectives? It seems this would better unify the language and image representations. More discussion could also be incorporated here.\", \"weaknesses\": \"1. The Dynamic Resolution Image Encoding section is very interesting. However, the current processing approach still inherits some drawbacks; for example, we still need to concatenate the features of all larger patches and the resized global image, which will inevitably increase the sequence length and introduce redundant information. It is also important to include more ablation studies here to demonstrate its significance.\\n\\n2. What is the motivation of the visual de-tokenizer training in the second stage and how will it help? As shown in Figure 3, SEED-X uses the image editing data to further finetune the visual de-tokenizer, hence the conditional image differs somewhat from the reconstruction target, for example, the removed dog shown in the example. 
I partially agree with the claim of Section 3.1 that the given reference could help recover more fine-grained details, but somehow the editing capability, which should be fully provided by the large multimodal model, actually came from the UNet of SD-XL to some extent, since most of the time we only want the tokenizer to compress the signal rather than modify it. Could the authors share any ablation study to see how it will impact the large model's capability?\\n\\n3. During inference, when the model wants to generate a new image, I wonder if the previous image in this sequence also needs to be sent to the de-tokenizer as a condition?\\n\\n4. In Figure 4, may I ask what is the white token (between IMG and the regressed image features)?\\n\\n5. The writing of the paper could be improved, as some parts are very confusing and high-level; I would recommend including more details regarding the implementation and the training.\", \"questions\": \"See the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces SEED-X, a multimodal base model that aims to improve the applicability of models in real-world applications by unifying multi-granularity understanding and generation capabilities. SEED-X introduces two key features: understanding images of arbitrary sizes and scales, and supporting multi-granularity image generation (including high-level instruction image generation and low-level image manipulation tasks). 
The paper shows the competitiveness of SEED-X on public benchmarks and demonstrates its effectiveness in multiple real-world applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The paper has a clear structure and coherent logic, and the ideas and thoughts are clearly presented through diagrams.\\n2) SEED-X receives and outputs arbitrary-size images, making it more useful in real open-world scenarios.\\n3) Compared to current MLLM methods, SEED-X integrates comprehension and generation abilities simultaneously, e.g., detection, dynamic-resolution image input, image generation, and high-precision editing, making it an actual generalist.\", \"weaknesses\": \"1) Lack of quantitative ablation study: in the Sec. 4.3 ablation study, the authors perform ablation studies on the training of the visual de-tokenizer and the pre-training of SEED-X and visualize the results. However, they lack a quantitative ablation study of their framework, e.g., the image gridding operation, which is claimed to support arbitrary sizes and aspect ratios. This operation contributes to one of their motivations but lacks the necessary quantitative analysis.\\n\\n2) As shown in Tab. 2, the experimental results compared to other MLLM methods on MLLM benchmarks are not competitive.\", \"questions\": \"1. As a general MLLM, the training and inference cost is essential; since the proposed model comprises more functionality than other models, these details need to be clarified.\\n\\n2. This paper conducts experiments on the MME, MMB, and SEED benchmarks. How is its performance on general MLLM tasks like VQA and cross-modal retrieval? 
The experiments did not show its general performance on MLLM tasks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents SEED-X, a multimodal model that (1) comprehends images of arbitrary sizes and ratios, and (2) enables multi-granularity image generation. The paper received scores of 3,5,5,6,5. The reviewers found some aspects of the proposed problem and approach interesting. However, the critical issues that were raised include limited novelty, insufficient experiments, and issues with clarity. Moreover, there was no rebuttal. The AC agrees with the reviewers' concerns, and recommends reject.\", \"additional_comments_on_reviewer_discussion\": \"No rebuttal provided, and there was no further discussion between the reviewers and AC.\"}", "{\"summary\": \"The authors present a unified and versatile foundation model, namely, SEED-X. SEED-X integrates two features: (1) comprehending images of arbitrary sizes and ratios, and (2) enabling multi-granularity image generation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors present a unified and versatile foundation model, namely, SEED-X. SEED-X integrates two features: (1) comprehending images of arbitrary sizes and ratios, and (2) enabling multi-granularity image generation.\", \"weaknesses\": \"1. The authors claim their results in Figure 1 and Figure 5 come from a \\\"unified and versatile foundation model,\\\" but it seems these results are from different instruction-tuned models (SEED-X-Edit, SEED-X-PPT, etc.). This could mislead readers into thinking a single model handles all functionalities.\\n2. The authors only compare their approach with a few related works.\\n3. Results are only provided for MMB, SEED-Bench-2, and MME. Other benchmarks like VQA, MM-Vet, MMMUv, and MathVista are missing.\\n4. 
The model is pre-trained on LLaMA2, which is outdated. This raises concerns about the results being state-of-the-art, especially compared to newer models like LLaMA 3.2-Vision.\", \"questions\": \"1. For Figure 1 and Figure 5, the author claims that the results presented come from a \\\"unified and versatile foundation model,\\\" but also mentions \\\"after instruction tuning.\\\" I am curious whether the results of these different functionalities come from the series of instruction-tuned models mentioned in Sec 3.3.2, including SEED-X-Edit, SEED-X-PPT, SEED-X-Story, and SEED-X-Try-on. If they do indeed come from these different models, I believe the author's phrasing is attempting to mislead readers into thinking that these results come from a single \\\"unified and versatile foundation model.\\\"\\n2. The term \\\"MMB Single\\\" in Table 2 refers to selecting questions from MMBench containing only one image? Why was this experimental setup chosen? The classification method I found in the original MMBench paper seems to be Overall, CP, FP-S, FP-C, AR, LR, RR. How do Seed-X and Seed-X-I perform on these subclasses compared to SOTAs?\\n3. In Table 2, it seems that comparisons are made with only a limited number of related works. Many of the related works mentioned in Table 1 and the results tested on benchmarks like MMBench have not been compared. What is the reason for not comparing with these works?\\n4. It seems that the authors only validated their approach on MMB, SEED-Bench-2, and MME, but what about the results on other benchmarks such as VQA, MM-Vet, MMMUv, MathVista, etc? \\n5. The authors pre-trained from LLaMA2-Chat-13B using LoRA, but LLaMA2 is already somewhat outdated, which raises concerns about whether the results are truly state-of-the-art. 
For example, how competitive is Seed-X, which is based on LLaMA2 pre-training, when compared to models like LLaMA 3.2-Vision?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors claim that they present a framework named SEED-X that integrates image comprehension and generation capabilities. SEED-X demonstrates promising applications for real-world scenarios, supporting comprehension and generation with arbitrary sizes and aspect ratios via visual tokenization/de-tokenization and dynamic resolution encoding.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper introduces a visual tokenization and de-tokenization method to support image generation and high-precision image manipulation.\\n\\n2. This paper proposes a dynamic resolution image encoding module which allows for the processing of images with various resolutions, enhancing the model\\u2019s adaptability to diverse real-world applications.\\n\\n3. The proposed method integrates image comprehension and generation into a single foundation model, which can be applicable in real-world scenarios.\", \"weaknesses\": \"1. Grammar Mistakes: This paper is filled with numerous grammatical mistakes, which severely compromise its quality. Here are some examples. Ep1: The use of singular and plural forms of \\u201cwork\\u201d is inconsistent. Line 81 has \\u201cSome pioneering work\\u201d while Line 99 has \\u201cnone of the previous works\\u201d. Ep2: Verb forms are not always used correctly. For example, in Line 209, \\u201cwhich effectively incorporate the aforementioned characteristics for real-world applications ...\\u201d should be changed to \\u201cwhich effectively incorporates the aforementioned characteristics for real-world applications ...\\u201d\\n\\n2. 
Missing Component: I couldn\u2019t find a summary of the authors\u2019 contributions, which can usually be found at the end of the Introduction section. This leaves me confused about the authors\u2019 contributions.\\n\\n3. Unknown Structure: In this paper, the authors only depict the independent modules of the method, while lacking a comprehensive view of the overall processing pipeline. For example, the authors could provide an architecture figure for their method and introduce the processing pipeline for image comprehension and generation step by step, which would make it easier for readers to understand their method.\\n\\n4. Extra Experiments: In the ablation study, 1) the authors only provide visualization results without numerical metrics, which could better reflect the comprehensive performance; 2) the authors only conduct experiments on image generation, while image comprehension is also significant in their setting; 3) the authors only ablate the number of learnable queries, which cannot reflect the effectiveness of their proposed modules: visual tokenization and de-tokenization, and dynamic resolution image encoding.\\n\\n5. Insufficient Innovation: I have concerns about the innovations of this paper from the following perspectives: 1) the two proposed innovations are disconnected from each other, and there is no strong correlation in the paper; 2) the authors propose a grand blueprint, while their innovations look a bit ordinary, especially considering their moderate performance compared with GPT.\", \"questions\": \"Please kindly refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Please kindly refer to the weaknesses.\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
Bwhd7GUyHH
LNUCB-TA: Linear-nonlinear Hybrid Bandit Learning with Temporal Attention
[ "Hamed Khosravi", "Ahmed Shoyeb Raihan", "Srinjoy Das", "Imtiaz Ahmed" ]
Existing contextual multi-armed bandit (MAB) algorithms struggle to simultaneously capture long-term trends as well as local patterns across all arms, leading to suboptimal performance in complex environments with rapidly changing reward structures. Additionally, they typically employ static exploration rates, which do not adapt to dynamic conditions. To address these issues, we present LNUCB-TA, a hybrid bandit model that introduces a novel nonlinear component (adaptive $k$-Nearest Neighbors ($k$-NN)) designed to reduce time complexity, and an innovative global-and-local attention-based exploration mechanism. Our method incorporates a unique synthesis of linear and nonlinear estimation techniques, where the nonlinear component dynamically adjusts $k$ based on reward variance, thereby effectively capturing spatiotemporal patterns in the data. This is critical for reducing the likelihood of selecting suboptimal arms and accurately estimating rewards while reducing computational time. Also, our proposed attention-based mechanism prioritizes arms based on their historical performance and frequency of selection, thereby balancing exploration and exploitation in real time without the need for fine-tuning exploration parameters. Incorporating both global attention (based on overall performance across all arms) and local attention (focusing on individual arm performance), the algorithm efficiently adapts to temporal and spatial complexities in the available context. Empirical evaluation demonstrates that LNUCB-TA significantly outperforms state-of-the-art contextual MAB algorithms, including purely linear, nonlinear, and vanilla combinations of linear and nonlinear bandits, in terms of cumulative and mean rewards and convergence performance, and shows consistent results across different exploration rates. Theoretical analysis further proves the robustness of LNUCB-TA with a sub-linear regret bound.
[ "Contextual Multi-Armed Bandit", "Exploration-Exploitation Trade-off", "Adaptive k-Nearest Neighbors (k-NN)", "Attention-Based Exploration Rate", "Sub-linear Regret" ]
https://openreview.net/pdf?id=Bwhd7GUyHH
https://openreview.net/forum?id=Bwhd7GUyHH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zrC047g7f8", "zKUT3mohZy", "wyLCxrcwTf", "vX9uBDYdjN", "ue3x23xzma", "sYqwQctm12", "p1loFlSok7", "n4quTK1YKZ", "mpcm0fzzqS", "lh0Mmtgln7", "koWdAW7nMI", "knjhoup0tw", "iHol2IVIXo", "g9EXKMGZsL", "eNDdhVoPgj", "dn84ROr6mR", "bFrOVnGYIt", "ZH4pFqs7WL", "YiWIAqPUlW", "YIjHF9i6Xz", "YDMzftThWW", "RbbvaQLVYg", "RA7VWWhtxX", "Ne5QqCg1GP", "JOBGEVDSGW", "HyU0OueUZf", "HwiyWVwNco", "God2CjKkGB", "GV4PIBReiW", "EAwjdf7HKE", "CghEMhCwMK", "BAjIOFqxFw", "9CFKqVmjJl", "4dLMblWbhB", "40rsYAYc6M", "40eRU1ukXJ", "3wI96E4xq6", "1daqgjjUMx", "0RYaOuGOde" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732396161461, 1732418519854, 1732500553035, 1732004700707, 1732000312020, 1731992434555, 1732002666751, 1732396136242, 1730522078017, 1731998820616, 1732417638668, 1732003605167, 1732403070359, 1731997405414, 1732733653040, 1732419174851, 1732425736513, 1730451644059, 1731992246145, 1732421900599, 1730709717872, 1732396221833, 1731991778217, 1732003755430, 1731995412610, 1731996348421, 1733247307675, 1732412523347, 1730715101237, 1731992623512, 1732574478399, 1732396201687, 1732001353346, 1731994476881, 1732428838780, 1732400848519, 1732573264511, 1732419268246, 1731993384376 ], "note_signatures": [ [ 
"ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Reviewer_j6kr" ], [ "ICLR.cc/2025/Conference/Submission11860/Reviewer_AgPg" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Reviewer_j6kr" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Reviewer_AgPg" ], [ "ICLR.cc/2025/Conference/Submission11860/Reviewer_AgPg" ], [ "ICLR.cc/2025/Conference/Submission11860/Reviewer_1LHD" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Reviewer_7MDe" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Reviewer_j6kr" ], [ "ICLR.cc/2025/Conference/Submission11860/Reviewer_AgPg" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11860/Reviewer_1LHD" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ], [ "ICLR.cc/2025/Conference/Submission11860/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer 7MDe,\\n\\nWe would like to express our sincere gratitude for your valuable insights and suggestions on our work. We have tried our best to address the concerns and queries you raised during the rebuttal process. However, we would greatly appreciate knowing whether our response has effectively resolved your doubts. Your feedback will be instrumental in improving the quality of our work. As the end of the discussion period is approaching, we eagerly await your reply before it ends.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for the quick response.\\n\\nMy question for the k-NN is then more valid: the problem formulation should stand alone, i.e., we do not need to talk about the algorithm when we formulate a problem. In this way, we can define the optimality and learning target, and then design an algorithm to solve it. Here, the problem formulation and algorithm design seem to be mixed together, which forms a logical loop.\\n\\nRegarding the contexts, I do understand more clearly the setting that the authors want to address.\"}", "{\"comment\": [\"I think we have differing opinions on what the relevant baselines are, and therefore I leave it to the good judgement of the Area Chair. I summarize my primary concern for them below.\", \"**Since the true reward function makes an assumption on its structure, and the authors claim it addresses nonlinear variations in reward functions and adapts to nonstationary environments, existing neural bandit algorithms, and the algorithms for non-stationary environments (as referenced in my previous responses) that have been proposed for such scenarios, are necessary baselines. 
Even if the proposed algorithm is not neural-network based, one needs convincing evidence that these existing algorithms are sub-par across environments.**\", \"Although I appreciate the efforts put in to provide theoretical bounds, such an analysis for a specific structural assumption on the reward function would not be of much significance to the community if existing algorithms already provide empirical success and provable regret performance without such an assumption.\", \"I suspect that with proper hyperparameter tuning (including the number of layers, width of the network, and step size) Neural UCB and other neural bandit algorithms that the authors did not compare against will provide better empirical bounds. I would encourage the authors to run these and provide code for further validation.\", \"Regarding the Tables, I apologize if the authors are offended by the comment, but I still think the table as a whole is a pile of text and each of the individual entries in Table 1 is too terse and abrupt to infer much from, while Table 2 seems unnecessary without empirical justification as to why the reward modeling is required (this relates to my primary concern stated above).\"]}", "{\"title\": \"Response to Reviewer 1LHD (Part 5/5)\", \"comment\": \"**Continue of Q5 (Benchmarks):**\\n\\n- **LinUCB and LinTS** used the linear contextual features (same as part of our model) but did not include the nonlinear k-NN adjustments. 
\\n This limitation led these models to capture only global trends while failing to adapt to local variations in the reward structure.\\n\\n\\n#### Why k-NN UCB and k-NN KL-UCB?\\n\\n- **Purpose:** In these models, non-linear, local adjustments are made based on nearest neighbors without taking global reward trends into account.\\n- **Relevance to Our Setting:** Our analysis of k-NN UCB and k-NN KL-UCB as baselines shows that while these models are effective in capturing localized reward dependencies, they have limited ability to accurately model global reward trends. Therefore, they are suboptimal in settings where both global and local trends are important.\\n\\n\\n### Our Unique Contribution:\\nAs stated in the Contribution section (lines 95-128), using the insights from these benchmarks, we propose a hybrid approach that incorporates both local (non-linear) and global (linear) factors. Our model captures the following:\\n- **Global trends (through the linear component):** this ensures robustness to high-dimensional contexts and captures a wide range of reward relationships.\\n- **Local adjustments (using the k-NN component):** we refine the reward estimation with context-sensitive nonlinear corrections, ensuring that our model outperforms both linear and nonlinear baselines by effectively balancing global and local information.\\n\\nRegarding the Kernel-UCB and Neural-UCB baselines, please refer to the answer to Q1, as we have analyzed these in the response to that question. Also, the Response to Reviewer 1 (AgPg), Parts 3 and 4, provides a detailed comparison to the existing models. In summary, the selected benchmarks demonstrate the limitations of purely linear or non-linear models, underscoring the value of our unique hybrid synthesis. 
By integrating global and local perspectives, our approach addresses the shortcomings of existing methods, providing robust performance across dynamic and complex reward settings.\\n\\n\\n### Performance Comparison Table\\n| **Model** | **Exploration Rate (\\u03b1)** | **Cumulative Reward** | **Mean Reward** |\\n|-----------------|--------------------------|------------------------|-----------------|\\n| **LNUCB-TA** | 0.01 | **752** | **0.94** |\\n| **LNUCB-TA** | 0.1 | **741** | **0.93** |\\n| **LNUCB-TA** | 1 | **752** | **0.94** |\\n| Neural UCB | 0.01 | 726 | 0.90 |\\n| Neural UCB | 0.1 | 717 | 0.89 |\\n| Neural UCB | 1 | 722 | 0.90 |\\n| Kernel UCB | 0.01 | 479 | 0.60 |\\n| Kernel UCB | 0.1 | 414 | 0.52 |\\n| Kernel UCB | 1 | 446 | 0.56 |\\n\\nAlso, the table above demonstrates the comparison between our model and Kernel-UCB and Neural UCB on the news recommendation dataset used in the paper, highlighting that our model consistently outperforms these methods. While Neural UCB performs well in this dataset, we extended the analysis to compare it with our model on the AstroPh co-authorship network in terms of cumulative reward (y-axis) observed at 5% (Figure 7). Neural UCB achieves a cumulative reward of 8636 observed nodes, which is **approximately 12% lower** than the **9808** achieved by our model (LNUCB-TA). Kernel UCB, on the same dataset, achieves 9332 observed nodes, which is **4.85% lower than our model**. Moreover, even our proposed component combined with Epsilon Greedy (green line in Figure 7) surpasses Neural UCB with 8917 observed nodes, **approximately 3.5% higher** than Neural UCB.\\n\\nInterestingly, in the news recommendation dataset, Neural UCB outperformed Kernel UCB, whereas in the AstroPh co-authorship network, Kernel UCB showed better performance than Neural UCB. Despite these variations, **our model outperformed both methods across both datasets**, demonstrating its robustness and superior adaptability in diverse scenarios. 
We will provide the updated Figure 7 in our revised manuscript after incorporating all the reviewers\\u2019 comments.\"}", "{\"title\": \"Response to Reviewer j6kr (Part 4/4 )\", \"comment\": \"**W5 (Regret):**\\n\\n### Regret Analysis\\n\\nAs stated in lines 216-239, the regret for our model at any time step is based on the arm selected. Equation 5 of our manuscript describes regret calculation where $g^a ((x_t^a )^*,(z_t^a )^* )$ represents the optimal expected reward for arm $a$ at the optimal context $(x_t^a )^*$, which is the feature vector yielding the highest reward (lines 225-226). The function $o_t^a (x_t^a,z_t^a )$ represents the expected reward under the decision made by the policy $\\\\pi_t^a$ at context $x_t^a$ with reward history $z_t^a$. Given this framework, regret inherently accrues only for the arm that is actively selected at each timestep, and by default, it is zero for any arm not chosen (line 227).\\n\\n### Regret Definition in Eqn. (4) and (5):\\nThe total regret $R_T (\\\\pi)$ (Eqn. (4)) measures the cumulative performance difference between the chosen policy $\\\\pi$ and the optimal policy $\\\\pi^*$ across all time steps $t$ (lines 216-220). It sums the difference between the rewards obtained from the optimal arm $\\\\pi_t^*$ and the rewards obtained from the arm selected by the policy $\\\\pi_t$ at each time step $t$:\\n\\n$$\\nR_T (\\\\pi) = \\\\sum_{t \\\\in [T]} \\\\left( Y_t^{(\\\\pi_t^*)} - Y_t^{(\\\\pi_t)} \\\\right)\\n$$\\n\\nHere, $Y_t^{(\\\\pi_t^*)}$ and $Y_t^{(\\\\pi_t)}$ represent the realized rewards for the optimal and selected arms, respectively, at time step $t$. The aim is to quantify the difference between the performance of the optimal policy and the policy under consideration, which is the standard in the literature.\\n\\nIn Eqn. 
(5), the regret for a single arm $a$ at time step $t$ is defined as the difference between the optimal expected reward for that arm at the optimal context $(x_t^a)^*$ and the expected reward under the decision made by the policy $\\pi_t^a$ at context $x_t^a$:\\n\\n$$\\n\\\\text{regret}_t^a = \\\\Delta_t^a \\\\left( g_t^a ((x_t^a )^*,(z_t^a )^*) - o_t^a (x_t^a,z_t^a ) \\\\right)\\n$$\\n\\nHere, $\\\\Delta_t^a$ is the indicator function that takes the value 1 if arm $a$ is selected at time $t$, and 0 otherwise. This formulation uses expected rewards, which are computed with respect to the underlying model of expected reward based on the context and the history of rewards, rather than the realized reward.\\n\\n### Regret Expansion in Eqn. (6):\\nIn Eqn. (6), we expand the cumulative regret over all arms and time steps. The total regret $R_T$ can be written as the sum of the individual regrets for each arm, where each arm's regret depends on the difference between the optimal expected reward (including both the linear and $k$-NN terms) and the expected reward for the arm selected under the policy. \\nSpecifically, Eqn. (6) expresses the total regret as:\\n\\n$$\\nR_T = \\\\sum_{a=1}^A \\\\sum_{t=0}^T \\\\Delta_t^a \\\\left( (\\\\mu_t^a )^* \\\\cdot (x_t^a )^* + k\\\\text{-}NN_{k,t}^{a} ((x_t^a )^*,(z_t^a )^*) - \\\\left( \\\\mu_t^a \\\\cdot x_t^a + k\\\\text{-}NN_{k,t}^{a} (x_t^a,z_t^a ) \\\\right) \\\\right)\\n$$\\n\\nThe terms $(x_t^a )^*$ and $x_t^a$ refer to the optimal context and the chosen context at time $t$ for arm $a$, while $k\\\\text{-}NN_{k,t}^{a}$ refers to the $k$-NN function for arm $a$ that adjusts the reward based on past observations. The expectation is over the reward function's structure, not the realized rewards, as we are dealing with expected rewards rather than the observed ones.\\n\\n### Reward\\n- **Expected rewards** refer to the rewards predicted by the model (including both the linear and $k$-NN components). 
\\n- **Realized rewards** represent the actual rewards observed at each time step, which include noise and stochastic variations not captured by the model. \\n\\nThe regret analysis in Eqns. (4) and (5) is based on expected rewards, which is standard practice in theoretical bandit settings. The regret measures how well the algorithm performs compared to the optimal policy that knows the expected rewards.\\n\\n### Summary of Regret Analysis:\\n1. **Total Regret Definition (Eq. 4):** \\n Total regret is defined as the difference in rewards obtained by the optimal policy and the chosen policy over all time steps. This is a standard high-level measure of performance deviations. \\n\\n2. **Per-Arm Regret (Eq. 5):** \\n In this equation, the per-arm regret is calculated by comparing the expected reward under the optimal policy with the expected reward under the chosen policy. Both global linear and local $k$-NN influences are included in the per-arm regret. \\n\\n3. **Expanded Total Regret (Eq. 6):** \\n Specifically incorporates linear and $k$-NN components in the reward difference when aggregating the regret across all arms and time steps. As a result of this expanded form, the cumulative regret computation can be integrated with the hybrid reward model by connecting total regret to individual arm regrets. \\n\\nAs a result of these equations, the total regret is linked to the details of each arm and context, emphasizing the structure of the hybrid model as it applies to regret analysis.\\n\\n**W9 (Notation):**\\n\\nThank you for your valuable comment. 
We will incorporate this into the revised version.\"}", "{\"title\": \"Response to Reviewer AgPg (Part 3/4)\", \"comment\": \"**W2-4 (Literature)**\", \"table_1\": \"Comparing our model with the 10 referenced models, highlighting the unique aspects and contributions of our approach.\\n\\n| **Aspect** | **Our Model (LNUCB-TA)** | **Deep Bayesian Bandits Showdown** | **Ensemble Sampling** | **Neural Linear Bandits** | **Kernelized Contextual Bandits** | **NeuralUCB** | **Neural Thompson Sampling** | **Contextual Bandits with Online Neural Regression** | **Optimal Contextual Bandits with Regression Oracles** | **FALCON** | **Efficient First-Order Contextual Bandits (FastCB)** |\\n|--------------------------------------|--------------------------------------------------|-------------------------------------------------|-----------------------------------------------|--------------------------------------------|----------------------------------------|--------------------------------------------|-----------------------------------------------|-------------------------------------------------------|-------------------------------------------------------|-----------------------------------|-------------------------------------------------------|\\n| **Primary Approach** | Linear + non-linear (k-NN) with attention-based \\u03b1 | Bayesian posterior approximations with neural networks | Ensemble models for approximation in complex models | Neural network with linear exploration | Kernel-based for non-linear feature mapping | Neural network-based UCB | Neural Thompson Sampling with neural network | Online neural regression | Regression oracle-based contextual bandit | Reduction of contextual bandits to offline regression using least squares regression oracle. 
| First-order contextual bandit with regression oracle |\\n| **Exploration-Exploitation Adjustment** | Real-time adjustment of \\u03b1 with attention on global and local rewards | Static posterior samples | Static ensemble diversity-based | Fixed exploration parameter, memory-based updates | Fixed exploration parameter | Fixed UCB confidence intervals | Posterior sampling from reward distribution | Fixed parameter for exploration | Regression-based oracle | Adaptive via epoch-varying learning rate. | Fixed oracle-based weighting |\\n| **Non-Linearity Handling** | k-NN term for local non-linear adjustments | Non-linearity via neural networks | Non-linearity through ensemble networks | Non-linearity through neural networks | Non-linearity via kernel functions | Non-linearity through neural feature mappings | Non-linearity through neural networks | Non-linearity with neural regression | Adaptable with function class | Flexible handling of function classes including parametric, non-parametric, and neural networks. | Adaptable through weighted regression oracle |\\n| **Dynamic Environment Adaptability** | Attention-driven, adapts exploration based on recent arm performance | No specific mechanism for dynamic adaptation | Limited to ensemble refresh | Limited; mitigates forgetting with memory buffer | Limited; requires fixed kernel | Limited by fixed confidence bounds | Limited by fixed exploration parameters | Limited by fixed exploration parameters | Limited to regression oracle adjustments | Moderate; suitable for settings where function class adapts slowly. 
| Fixed first-order weighting |\\n| **Need for Hyperparameter Tuning** | Low, \\u03b1 adapts dynamically, reducing need for preset exploration tuning | Moderate, requires neural architecture tuning | Moderate, ensemble size tuning required | High, neural architecture and buffer size tuning required | High, kernel selection and tuning required | High, neural architecture and confidence tuning required | High, neural architecture and sampling tuning | High, neural architecture tuning | Low, regression oracle reduces need for additional tuning | Low | Low, regression oracle-based with minimal tuning |\\n| **Real-Time Exploration Control** | Yes | No | No | No | No | No | No | No | No | No | No |\"}", "{\"title\": \"Response to Reviewer 1LHD (Part 2/5)\", \"comment\": \"**Continue of Q2 (Specific Scenario):**\\n\\n- **Efficient Exploitation in a Stable Environment:** \\n When $\\\\alpha_t^a$ decays quickly, the algorithm is able to exploit stable reward patterns effectively, avoiding exploration of arms with similar expected rewards. By converging more rapidly towards exploitation, this approach maximizes cumulative rewards, which is optimal when rewards are relatively consistent across arms.\\n\\n$\\\\kappa$ adjusts the weight between global and local rewards in the attention mechanism:\\n- **Lower $\\\\kappa$ values** prioritize local attention, enhancing responsiveness to individual arm trends, which is useful in dynamic environments such as e-commerce where consumer preferences change rapidly.\\n- **Higher $\\\\kappa$ values** emphasize global attention, which is suitable for stable environments like financial markets where long-term trends prevail. \\n\\nFigure 6 shows that the model performs robustly across various $\\\\kappa$ settings, confirming its adaptability to different strategic emphases without compromising overall performance in later steps. 
This flexibility allows for effective application in both rapidly changing and stable environments, ensuring optimal performance tailored to the specific context.\\n\\n### Intuition\\nAs stated in lines 326-331, the $\\\\alpha_{N_t^a}$ adjusts the exploration rate through an attention mechanism that considers both temporal and spatial variations in data. This dual consideration allows for dynamic adjustment of exploration efforts based on time-dependent changes (**temporal**) and distinct reward patterns across different arms (**spatial**). By integrating these aspects, the mechanism enhances the model's adaptability to real-time changes, ensuring more effective exploration and exploitation. \\n\\nThis approach is particularly novel as it moves beyond static or merely context-aware adjustments seen in other models (lines 85-97). The robustness of this approach is confirmed in our empirical results (**Figure 2**, and **Figure 6**) and supported by **Theorem 2**, illustrating that our model maintains consistent performance across a range of $\\\\kappa$ and $\\\\alpha_0$ values, which marks a substantial advance over current models.\\n\\n**Q4 (Assumption 3):**\\n\\n**Confidence Ball: Assumption vs. Proposition**\\n\\n### Assumption 3 (Confidence in Parameter Estimation)\\n\\nAssumption 3 states that, for all time steps $t \\\\in [T]$ and arms $a \\\\in [A]$, the true parameter vector $(\\\\mu_a )^*$ lies within a confidence ball $\\\\text{BALL}_t^a$ centered around the estimated parameter $\\\\hat{\\\\mu}_t^a$. This assumption is a prerequisite for deriving high-probability confidence bounds and is defined in Definition 1 in Equation (17) of the paper. In which, $\\\\Sigma_t^a$ is the arm-specific covariance matrix and $\\\\beta_t^a$ is the confidence parameter that scales with the uncertainty in the measurements up to time $t$. 
The confidence ball incorporates both linear and nonlinear ($k$-NN) adjustments, as shown in Assumption 1 and Corollary 4 of the paper.\\n\\n### Proposition 1 (Uniform Confidence Bound)\\n\\nProposition 1 demonstrates the confidence ball property, ensuring it holds with high probability across all time steps $t$ and arms $a$:\\n\\n$$\\n\\\\Pr \\\\left( \\\\forall t, (\\\\mu^a )^* \\\\in \\\\text{BALL}_t^a \\\\right) \\\\geq 1 - \\\\delta\\n$$\\n\\nWe have provided the proof for this proposition in the paper, and it relies on self-normalized martingale inequalities, ridge regression guarantees, and the union bound, ensuring that Assumption 3 is mathematically valid and serves as the basis for our regret analysis (Proposition 2).\\n\\n### Implications for Exploration-Exploitation and Regret Bounds\\n\\nBy this assumption and Proposition 1, the true parameter vector $(\\\\mu^a )^*$ lies in $\\\\text{BALL}_t^a$ with high probability. This result supports:\\n\\n1. **Exploration Bonus (Lemma 2):** The confidence ball defines the exploration bonus:\\n $$\\n \\\\sqrt{\\\\beta_t^a} \\\\cdot \\\\sqrt{(X_t^a)^\\\\top (\\\\Sigma_t^a)^{-1} X_t^a}.\\n $$\\n\\n2. **Regret Bounds:** Proposition 2 uses the confidence ball to bound regret:\\n $$\\n R_T \\\\leq \\\\sum_{t=1}^T \\\\sum_{a=1}^A \\\\Delta_t^a \\\\left( l_t^a \\\\left( (X_t^a )^* \\\\right) + f_{k,t}^a \\\\left( (X_t^a )^*, (Z_t^a )^* \\\\right) - l_t^a (X_t^a ) - f_{k,t}^a (X_t^a, Z_t^a ) \\\\right).\\n $$\\n\\n### Role of Assumption 3 in Bandit Literature\\n\\nThe use of confidence regions, such as $\\\\text{BALL}_t^a$, is a standard practice in contextual bandit literature:\\n\\n1. **Linear Bandits:** \\n In works like LinUCB, the assumption that the true parameter vector resides within a high-probability confidence region is central to the regret analysis, which allows exploration bonuses to be derived for UCB-based strategies and linked to regret bounds.\\n\\n2. 
**Nonlinear Bandits with Localized Models:** \\n Our Assumption 3 (Confidence in Parameter Estimation) is a standard prerequisite in contextual bandit literature, including models like $k$-NN UCB. Specifically, $k$-NN UCB defines confidence intervals through the uncertainty value $U_{t,k}^a (x)$.\"}", "{\"comment\": \"Dear Reviewer AgPg,\\n\\nWe would like to express our sincere gratitude for your valuable insights and suggestions on our work. We have tried our best to address the concerns and queries you raised during the rebuttal process. However, we would greatly appreciate knowing whether our response has effectively resolved your doubts. Your feedback will be instrumental in improving the quality of our work. As the end of the discussion period is approaching, we eagerly await your reply before the end.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"summary\": \"This work studies the contextual bandit problem with an introduction of k-NN designed to introduce nonlinearity and temporal dependency into the reward function. A set of theoretical analysis (regret upper bound) and experimental results is provided.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The idea of introducing k-NN's to leverage inner structures (i.e., similarity) between arm contexts is an interesting direction. I appreciate the author's effort in this to introduce additional nonlinearity into the system.\", \"The overall writing and presentation (not from a technical perspective) is satisfying.\", \"As I do have many confusions over the setting and problem itself, I would love to hear clarifications from the authors to further judge this work.\"], \"weaknesses\": \"I am currently holding many confusions over the setting of this work, and thus not readily at a stage to judge this work. I will take a deeper look into the technical contributions after I found myself understood the basics.\", \"major_questions\": \"1. The reward defined in Eqn. 
(1) is weird to me in the sense that as an expected reward, it would depend on historically pulled arms and randomly realized reward through the k-NN function. I have not seen similar formulations in bandit studies, including the previous k-NN UCB paper (Reeve et al., 2018), where I think the k-NN is not a part of the expected reward.\\n\\n2. With that, is the $\\\\mu_t^a$ vector unknown while also time-varying in Eqn. (1) given the subscript $t$?\\n\\n3. Is the exploration-exploitation tradeoff discussed around Eqn. (2) a part of the formulation or algorithmic design?\\n\\n4. The optimal action can also be defined with more clarity. In particular, for each arm, Eqn. (3) says there is an optimal context; however, is the context generated by the environment, or is the context (instead of the arm) that the player is selecting (if so, I do not see context selection in the algorithm)? Also, I found no definition of the decision space $D$.\\n\\n5. The regret definition in Eqn. (4) and its expansion in Eqn. (6) to connect with the single-step regret in Eqn. (5) is worth debating: Eqn. (4) is measured with respect to the randomly realized reward $Y$, while Eqn. (5) is with respect to the expected rewards? Hopefully the authors can explain Eqn. (6) a bit better, especially clarify the notations.\\n\\n6. It seems that I found no description of the estimation of $\\\\mu_t^a$ anywhere in the algorithm?\\n\\n7. Section 3.2 seems to be about selecting a proper $k$ for k-NN; however, is $k$ a parameter that is given in the reward definition?\\n\\n8. Also, in general I did not understand the purpose of Theorem 2, i.e., what is its statement?\", \"minor_questions\": \"9. The notations of $\\\\hat{Y}$ and $Y$ are used in a mixed way in Section 2.\", \"questions\": \"Please refer to weakness. 
I would love to re-examine this work with the questions addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer j6kr (Part 3/4)\", \"comment\": \"**Continue of Example in W4:**\\n\\n2. **Expected Reward Calculation for the Current Context:** \\n After determining the optimal $k$ for each arm (embedded in $(x_a^t)^*$), the model computes the expected reward for the actual context provided by the environment based on Equation (1) in the paper: \\n $$o^a_t(x^a_t, z^a_t) = l^a_t(x^a_t) + f_{k,t}^a(x_t^a, z_t^a) = \\\\mu_t^a \\\\cdot x_t^a + \\\\text{k-NN}_{k,t}^a(x_t^a, z_t^a).$$ \\n \\n This allows the model to calculate both the global reward trends ($\\\\mu_t^a \\\\cdot x_t^a$) and the local reward adjustments ($f_{k,t}^a(x_t^a, z_t^a)$) for each arm: \\n - **Sports:** $o_{\\\\text{Sports}}^t = 0.4$ \\n - **Politics:** $o_{\\\\text{Politics}}^t = 0.6$ \\n - **Entertainment:** $o_{\\\\text{Entertainment}}^t = 0.8$ \\n\\n3. **Action Selection:** \\n While the model computes the optimal context $(x_a^t)^*$ and evaluates the expected reward for each arm, only one arm is selected to play. In this case, Entertainment is selected because $o_{\\\\text{Entertainment}}^t = 0.8$ is the highest expected reward.\\n\\n4. **Reward Observation and Updates:** \\n After selecting Entertainment, the realized reward (e.g., click or no click) is observed. \\n The parameter $\\\\mu_a^t$ is updated via ridge regression, and the local adjustment $f_{k,t}^a(x_t^a, z_t^a)$ is refined by incorporating the new reward into the historical reward data. \\n\\n5. **Game Protocol:** \\n The optimal context $((x_a^t)^*)$ for each arm at time $t$ includes the dynamically adjusted $k$, ensuring the $k$-NN component provides the best local adjustment for that arm. 
\\n Even though the optimal context is computed for each arm, only one arm is selected to play, and only its reward contributes to learning at this step.\\n\\n\\n**W8 (Theorem 2):**\\n\\nIn Theorem 2, we illustrate how an attention mechanism can dynamically update the exploration parameter over time, effectively balancing global and local reward information. Specifically, $\\\\alpha$ is updated based on the number of arm selections, global attention (the global performance across all arms), and local attention (the performance of a specific arm). \\n\\nAs more data is available, this dynamic adjustment enables the model to switch between exploration and exploitation. As a result of infrequent selection, the model explores more, and as its understanding of arms' performance improves, the model increasingly exploits this knowledge. We have demonstrated this approach both theoretically and experimentally, showing substantial improvements in bandit decision-making. The fact that an attention mechanism \\\"can be designed\\\" in this way demonstrates one of the significant contributions of our study: the ability to integrate attention-based exploration rates into a wide variety of bandit models. As a result of this flexibility, models such as **Epsilon Greedy**, **BetaThompson**, and **LinThompson** are able to incorporate dynamic and adaptive exploration strategies, which significantly enhance their decision-making capabilities (lines 1385-1390). \\n\\nIn the paper, we introduce an adaptive weighting system for exploration rates based on attention mechanisms for each arm. A data-driven and context-sensitive exploration decision is ensured by attention-based adjustment, ensuring that the models are more adaptable. The results of this mechanism are shown in **Figure 5** and **Table 4**. A summary of the improvement can be found in the table below.\\n\\nTable 2: Effect of Attention-Based Exploration Rate on Other Bandit Models. 
Best mean reward (BMR), best cumulative reward (BCR), and the improvement percentage over the base model.\\n\\n| **Model** | **BMR** | **BCR** | **Imp. Over Base Model (%)** |\\n|-----------------------------|---------|---------|------------------------------|\\n| **BetaThompson-enhanced** | 0.79 | 632 | **259.09** |\\n| **Epsilon Greedy-enhanced** | 0.58 | 464 | **123.08** |\\n| **LinThompson-enhanced** | 0.69 | 552 | **64.29** |\\n\\n\\n\\nRegarding the intuition, as stated in lines 326-331, $\\\\alpha_{N_t^a}$ adjusts the exploration rate through an attention mechanism that considers both **temporal** and **spatial** variations in data. This dual consideration allows for dynamic adjustment of exploration efforts based on time-dependent changes (**temporal**) and distinct reward patterns across different arms (**spatial**). By integrating these aspects, the mechanism enhances the model's adaptability to real-time changes, ensuring more effective exploration and exploitation. \\n\\nThis approach is particularly novel as it moves beyond static or merely context-aware adjustments seen in other models (lines 85-97). The robustness of this approach is confirmed in our empirical results (**Figure 2** and **Figure 6**) and supported by **Theorem 2**, illustrating that our model maintains consistent performance across a range of $\\\\kappa$ and $\\\\alpha_0$ values, which marks a substantial advance over current models.\"}", "{\"comment\": \"We would like to thank the respected reviewer for their comments and time.\\n\\n### **Response to W1, W2, W7:**\\n\\nWe appreciate your question regarding $k_t^a$ and its role in the reward model. As mentioned before, $k_t^a$ is not pre-fixed or defined as a static sequence in the problem formulation. Instead, it is dynamically adjusted based on the reward history. 
This adaptivity is a central part of our approach, ensuring the model's flexibility in capturing local reward patterns effectively.\\n\\n**Yes**, the dynamic adjustment of $k_t^a$ has been explicitly highlighted throughout the paper, including in:\\n\\n- **Abstract:** Line 20 \\n- **Contribution:** Lines 100, 106\\u2013107, Table 1 \\n- **Problem Definition:** Lines 173\\u2013176 (**addressing your specific question**) \\n- **Section 3.2:** Lines 289\\u2013290, Algorithm 2 (Line 5), Lines 308\\u2013311 \\n\\nWe hope this clarification resolves your concerns and highlights how $k_t^a$ is embedded as a dynamic and adaptive component of the algorithm, rather than being pre-fixed.\\n\\n---\\n\\n### **Response to W3:**\\n\\nRespectfully, we believe it is essential to include the exploration-exploitation trade-off in both Section 2 (Hybrid Contextual MAB Learning) and Section 3 (Methodology) to present our approach effectively. These sections have distinct roles and together provide a complete understanding of our model.\\n\\nSection 2 introduces the trade-off as a core aspect of the problem, highlighting our novel approach with the attention-based exploration parameter. This ensures readers understand that our method goes beyond standard exploration-exploitation strategies by dynamically adapting the exploration rate based on both global and local rewards. Without this inclusion, readers would not fully understand the problem formulation and could assume it aligns with standard bandit models, missing the distinction in how the exploration-exploitation trade-off is addressed.\\n\\n\\nSection 3 then details how this trade-off is implemented using temporal attention (e.g., Algorithm 3). This connection between problem definition and practical implementation ensures the novelty and adaptability of our method are clearly conveyed. 
Both sections are necessary to differentiate our work and provide a cohesive explanation.\\n\\n---\\n\\n### **Response to W4:**\\n\\n**$x_t$:** \\nEach context $x_t \\\\in \\\\mathcal{X}$ at time $t$ corresponds to a set of possible actions, or \\\"arms,\\\" indexed by $a$ within the set $\\\\mathcal{A} = \\\\{1, \\\\ldots, A\\\\}$, where $A$ is the total number of arms (**lines 158-160**). \\n\\n**$x_t^a$:** \\nRepresents the specific feature vector for arm $a$ at time $t$ (**line 172**). \\n\\nWhile $x_t$ originates as the same environmental input for all arms, it is utilized uniquely for each arm $a$, resulting in the arm-specific feature vector $x_t^a$ tailored to the decision-making process for that particular arm. \\n\\n**$(x_t^a)^\\\\*$:** \\nIs the optimal context for arm $a$, computed as the feature vector that maximizes the expected reward for arm $a$ at time $t$ (**line 206**). \\n\\nIt is derived from $x_t$ by dynamically adjusting the best available historical data through the adaptive $k$-NN mechanism (**lines 207-210**). \\n\\n\\n\\n### Key Differences:\\n- **Global vs. Arm-Specific:** \\n $x_t$ is global and shared across all arms, while $(x_t^a)^\\\\*$ is arm-specific and tailored to maximize the reward for arm $a$. \\n\\n- **Environment-Provided vs. Computed:** \\n $x_t$ is provided by the environment, whereas $(x_t^a)^\\\\*$ is computed by the model.\\n\\n- **Static vs. Dynamic:** \\n $x_t$ is static and unchanging during the decision process, while $(x_t^a)^\\\\*$ is dynamically adjusted for each arm based on its historical performance and context similarity. \\n\\n- **Purpose:** \\n $x_t$ defines the context for evaluating all arms, whereas $(x_t^a)^\\\\*$ identifies the best possible context for arm $a$, ensuring optimal reward prediction. 
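To make this distinction concrete, here is a minimal sketch (Python/NumPy; all parameter values are toy numbers of ours, not the paper's) in which a single environment-provided context $x_t$ is scored separately by each arm with arm-specific parameters, and only one arm is played:

```python
import numpy as np

# One environment-provided context x_t, shared by all arms (toy user features).
x_t = np.array([1.0, 0.5])

# Arm-specific parameter vectors mu_a (hypothetical values): the same x_t is
# used by every arm, but each arm scores it with its own parameters -- this is
# the arm-specific usage that the notation x_t^a refers to.
mu = {
    "Sports":        np.array([0.2, 0.4]),
    "Politics":      np.array([0.5, 0.2]),
    "Entertainment": np.array([0.6, 0.4]),
}

# Score every arm on the shared context, then play only the best one.
scores = {arm: float(m @ x_t) for arm, m in mu.items()}
chosen = max(scores, key=scores.get)  # only one arm is actually played
```

With these toy values the linear scores come out to 0.4, 0.6, and 0.8, mirroring the Sports/Politics/Entertainment example used elsewhere in this discussion (there the scores also include the $k$-NN adjustment, which is omitted here).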
\\n\\n\\n\\nWe hope this clarification has addressed your concerns effectively.\"}", "{\"title\": \"Response to Reviewer 1LHD (Part 3/5)\", \"comment\": \"**Continue of Q4 (Assumption 3):**\\n\\nThe uncertainty value implicitly bounds the true reward function $f_a (x)$ within: \\n\\n$$\\n\\\\hat{f}\\\\_{t,k}^a (x) \\\\pm U\\\\_{t,k}^a (x)\\n$$\\n\\n\\n\\n providing a confidence region analogous to our explicit definition of $\\\\text{BALL}_t^a$. \\n\\n This approach ensures exploration-driven optimism, and both frameworks rely on high-probability bounds for the true parameters. Also, in Neural UCB, a similar assumption has been used (Lemma 5.1).\\n\\n3. **Union Bound Over Time Steps:** \\n Many regret analyses use a union bound across time steps $t \\\\in [1,T]$ to ensure that the confidence intervals hold uniformly, as seen in \\\\citet{lattimore2020bandit}. The logarithmic terms in $\\\\beta_{t,a}$ are adjusted accordingly to account for the increasing number of time steps.\\n\\n4. **Connection to Self-Normalized Martingales:** \\n The self-normalized bound for the noise term $\\\\eta_{t,a}$ is a standard tool in contextual bandit literature. This ensures that even in stochastic settings, the assumption remains valid for bounding cumulative regret.\\n\\n\\n### Summary\\nIn summary, Assumption 3 is actually not a strong assumption but a standard prerequisite in bandit literature, consistent with both linear (e.g., LinUCB) and non-linear models (e.g., k-NN UCB). Proposition 1 validates this assumption, ensuring it holds uniformly with high probability. Together, they provide a robust foundation for our theoretical framework and regret analysis.\\n\\n\\n---\\n**Q3 (Exploration Rate, Arm Specific Scenario):**\\n\\n### Proof Overview\\nThe proof overview is intended to provide a high-level understanding of the process, helping the reader understand the main ideas and flow without being overwhelmed by technical details. 
It provides a general outline of the methodology, leaving the rest of the section for the detailed proofs. \\nRather than repeating technical details that have already been addressed later in the paper, the objective of the overview is to provide an intuitive explanation of how the proof proceeds. Appendix A contains all proofs, including all propositions, lemmas, and supporting arguments. But, we will add more explanation and a table for comprehensive list of notations and their representations based on your comment.\\n\\n### Role of $\\\\alpha_{N_t^a}$ \\nUsing this exploration parameter, the exploration-exploitation balance is dynamically adjusted by modulating confidence bounds by affecting the exploration bonus indicated in equation (7) of the manuscript. As a result of the exploration parameter in LNUCB-TA, it is possible to dynamically adjust the exploration-exploitation trade-off for each arm. The size of the confidence ball is affected by this adjustment, which indirectly affects the regret bounds. \\nIn contextual bandit proofs, it is standard practice for exploration parameters such as $\\\\alpha$ (in LinUCB) or $\\\\rho$ (in k-NN UCB) to indirectly affect theoretical guarantees. Instead of being explicitly expressed in regret bounds, their influence is encapsulated in terms such as confidence regions or covariance matrices.\\n- In LinUCB, the $\\\\alpha$ parameter adjusts the size of the uncertainty term within the confidence interval, but it does not appear explicitly within the regret bounds.\\n- $\\\\rho$ modulates uncertainty in k-NN UCBs based on neighborhood distances, although this contribution is similarly reflected in the confidence region. \\n\\nLNUCB-TA follows this standard. $\\\\alpha_{N_t^a}$, which combines global and local reward signals dynamically, influences the theoretical results through its impact on the confidence parameter. 
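To illustrate the mechanism described above, here is a minimal sketch (Python/NumPy) of how an attention-weighted exploration rate can modulate a LinUCB-style confidence bonus. The blending form $\kappa \cdot \text{global} + (1-\kappa) \cdot \text{local}$, the $1/\sqrt{N_t^a}$ decay, and all names are our illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def adaptive_alpha(alpha0, kappa, global_rewards, arm_rewards):
    """Illustrative attention-weighted exploration rate (an assumption, not
    the paper's exact rule): kappa blends the global mean reward with the
    arm's local mean, and the arm's pull count shrinks exploration over time."""
    g = float(np.mean(global_rewards))                     # global attention
    l = float(np.mean(arm_rewards)) if arm_rewards else g  # local attention
    n = max(len(arm_rewards), 1)                           # N_t^a
    return alpha0 * (kappa * g + (1.0 - kappa) * l) / np.sqrt(n)

def ucb_index(mu_hat, x, Sigma, alpha):
    """LinUCB-style index: a larger alpha widens the confidence bonus
    sqrt(x^T Sigma^{-1} x), so alpha directly controls exploration."""
    bonus = np.sqrt(x @ np.linalg.solve(Sigma, x))
    return float(mu_hat @ x) + alpha * bonus

x = np.array([1.0, 0.0])
mu_hat = np.array([0.3, 0.1])
a_rare = adaptive_alpha(1.0, 0.5, [0.4, 0.6], [0.5])        # pulled once
a_freq = adaptive_alpha(1.0, 0.5, [0.4, 0.6], [0.5] * 100)  # pulled often
idx_rare = ucb_index(mu_hat, x, np.eye(2), a_rare)
idx_freq = ucb_index(mu_hat, x, np.eye(2), a_freq)
```

The rarely pulled arm keeps a larger exploration rate and hence a wider bonus, which is the qualitative behavior discussed here; in the actual analysis this effect is absorbed into the confidence parameter $\beta_t^a$ rather than appearing explicitly in the regret bound.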
\\n\\nThe confidence region in LNUCB-TA is stated in Definition 1 in the paper, in which, $\\\\alpha_{N_t^a}$ indirectly influences $\\\\Sigma_t^a$ by determining the exploration bonus, which governs how frequently $x_t^a$ contributes to the update. Please kindly refer to Corollary 2, 3, and Lemma 5 for more details. \\n\\nAlso, the weight $\\\\kappa$ controls the relative importance of global and local attention in $\\\\alpha_{N_t^a}$. While critical in determining the exploration parameter, it does not explicitly appear in the proof as $\\\\kappa$ affects the magnitude of $\\\\alpha_{N_t^a}$, which is reflected in $\\\\Sigma_t^a$. This influence is absorbed into the confidence parameter $\\\\beta_t^a$, as discussed above. Also, the proof focuses on bounding regret by ensuring that $(\\\\mu^a)^* \\\\in \\\\text{BALL}\\\\_t^a$\\nfor all $t$ with high probability. While $\\\\kappa$ indirectly affects this containment through $\\\\alpha_{N_t^a}$, the final expressions are simplified to focus on the confidence bounds, which is the standard practice in literature.\\n\\n\\n### Distinct Setting: Per-Arm Covariance Matrices in Our Model\\nIn standard contextual bandit models (e.g., Lattimore & Szepesv\\u00e1ri, 2020), a global covariance matrix aggregates information across all arms:\\n\\n$$\\n\\\\Sigma_t = \\\\lambda I + \\\\sum_{i=1}^t x_i x_i^\\\\top\\n$$\\n\\nwhere $x_i$ represents the context vector of the selected arm at time $i$. \\nPlease see next part for the rest of response.\"}", "{\"comment\": \"Dear Reviewer 1LHD,\\n\\nThank you for your response and for considering our rebuttal in your evaluation. 
We would like to address your feedback regarding the length of our responses and explain the rationale for our approach, particularly as it pertains to the specific feedback from you and other reviewers.\\n\\n\\n\\nFor **Respected Reviewer AgPg**, their feedback included requests to compare our model with ten additional existing models and to integrate more literature. Naturally, this required us to provide a detailed and extensive response to address their concerns comprehensively.\\n\\n\\nFor **Respected Reviewer 7MDe**, they requested additional experiments and posed questions about the limitations and potential directions for future studies. This necessitated a detailed and careful explanation to ensure we addressed their valuable feedback appropriately.\\n\\n\\nFor **Respected Reviewer j6kr**, they raised nine distinct questions, of which they stated would directly impact their final evaluation after being addressed. Out of respect for their engagement and to provide sufficient clarity, we crafted detailed responses to each point to ensure no ambiguity remained.\\n\\n\\nFor **Respected Reviewer 1LHD (yourself)**, we aimed to address your detailed feedback thoroughly. Your questions on the mathematical formulations and requests for additional model comparisons required comprehensive responses to ensure clarity and completeness. The length of our responses reflected our commitment to providing the necessary details for an accurate evaluation.\\n\\nAdditionally, we would like to highlight that the inclusion of extensive mathematical formulas **(often taking 500 characters or more)** and the 5000 character limit per response naturally led to more segmented replies. We appreciate your engagement and effort in reviewing our work.\\n\\nIn summary, the length of our responses was not intended to overwhelm but to respect the depth and breadth of the feedback provided by each reviewer. 
We believe this level of detail was necessary to address the specific concerns raised. We greatly value your time and effort in reviewing our paper and hope this explanation clarifies our approach.\\n\\n\\n\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer j6kr (Part 2/4 )\", \"comment\": \"**W4 (Problem Setup):**\\n\\n### Context Generation and Selection:\\n\\nAs noted in line 171, the feature vector $X_t$ is drawn independently and identically distributed (i.i.d.) from a fixed marginal distribution over the context space $X$. This means that the context $X_t$ at each time step $t$ is generated by the environment according to the probability distribution, which is fixed but may vary depending on the specific application or problem at hand. \\nContext is not chosen by the agent at each time step but rather is provided by the environment. In the proposed model, the agent is required to select the arm that maximizes the expected reward based on the context, which is determined by the hybrid model of linear and nonlinear components in the algorithm. Even though the agent does not choose the context, it plays an important role in determining the reward structure for each arm, as well as informing the decision-making process.\\n\\n### Optimal Action and Decision Space $D$:\\n\\nIn Eqn. (3), the \\\"optimal context\\\" $(x_a^t)^*$ refers to the best possible context (in terms of maximizing expected reward) for each arm. This context incorporates both the global linear model and the local understanding provided by the $k$-NN term. Specifically, the optimal context is defined by the number of nearest neighbors $k$ in the $k$-NN function, which adds a local adjustment to the global linear prediction. This combination allows the model to account for both broader trends captured by the linear model and finer, local structures revealed by the $k$-NN adjustment. 
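The hybrid prediction described above can be sketched as follows (Python/NumPy; an illustration of the structure of Eq. (1), with Euclidean nearest neighbors and a plain mean of neighbor rewards as assumed stand-ins for the paper's exact $k$-NN estimator):

```python
import numpy as np

def hybrid_estimate(mu_hat, x, history, k):
    """Global linear part mu_hat . x plus a local k-NN part: the mean reward
    of the k past contexts closest to x (illustrative, not the exact model)."""
    linear = float(mu_hat @ x)
    if not history:
        return linear
    # Sort past (context, reward) pairs by distance to the current context.
    nearest = sorted(history, key=lambda cr: float(np.linalg.norm(cr[0] - x)))
    local = float(np.mean([r for _, r in nearest[:k]]))
    return linear + local

mu_hat = np.array([0.2, 0.1])
history = [(np.array([1.0, 0.0]), 0.5),   # (past context, realized reward)
           (np.array([0.9, 0.1]), 0.3),
           (np.array([0.0, 1.0]), 0.9)]
est = hybrid_estimate(mu_hat, np.array([1.0, 0.0]), history, k=2)  # 0.2 + 0.4
```

Varying $k$ changes only the local term, which is what lets the per-arm choice of $k$ act as a local adjustment layered on top of the shared linear trend.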
\\nThe decision space $D$ represents the set of all possible contexts that the model evaluates. This space is crucial because it facilitates a thorough exploration of potential scenarios. In the LNUCB-TA model, the \\\"arm-dependent optimal action\\\" refers to the best reward obtained for arm $a$ based on its history over $t$ steps, leading to the theoretical optimal action $\\\\pi_t^*$. As noted in lines 213-215, while we compute the optimal action for each arm, the model ultimately selects only one arm to play at each time step, choosing the arm with the highest expected reward. This ensures that the model is both locally optimal (for each arm) and globally optimal (across all arms) at that particular time step.\\n\\n### Game Protocol:\\n\\nAt each time step $t$, our model evaluates potential rewards for each arm based on its specific context $x_a^t$ and historical rewards $z_a^t$. While the model computes what the optimal action would be for each arm independently, only one arm is actually selected to play. This selection is made by choosing the arm that, according to the model's computations, offers the highest expected reward at that time. In this way, the decision-making process ensures that the model adapts to each arm's unique circumstances while still making a single, globally informed decision that optimizes the expected reward across all arms.\\n\\n### Simplified Example \\n\\nImagine a news recommender system where the goal is to recommend articles to users while maximizing user engagement (e.g., clicks). Here's how the proposed LNUCB-TA model operates step-by-step in this context:\\n\\n#### **Setting:**\\n- **Arms:** Categories of news articles (e.g., Sports, Politics, Entertainment). \\n- **Context ($X_t$):** User profile features at time $t$, such as age, location, device type, and browsing history. \\n- **Reward ($Y_t$):** User engagement (e.g., click or no click on the recommended article). 
\\n- **Decision Space ($D$):** The set of all possible contexts $X_t$ across all arms. \\n\\n#### **Example Scenario:**\\nAt time $t$, a user logs into the platform with the context vector: \\n$X_t = [\\\\text{age: 25, location: New York, device: mobile}]$. \\nThis context is provided by the environment, not chosen by the agent.\\n\\n1. **Optimal Action Computation for Each Arm:** \\n For each arm $a$ (e.g., Sports, Politics, Entertainment), the model evaluates the optimal context $(x_a^t)^*$ that maximizes the expected reward based on Equation (3) of the paper: \\n $$(x_t^a)^* \\\\in \\\\arg\\\\max_{x \\\\in D} \\\\left( (\\\\mu^a)^* \\\\cdot x + f_{k,t}^a(x, z_t^a) \\\\right)$$\\n Here, $(x_a^t)^*$ refers to the optimal context for arm $a$, which includes the optimal number of nearest neighbors $k$ used in the $k$-NN adjustment. The optimal $k$ dynamically adjusts based on the reward history ($z_a^t$) and the current context. \\n - For **Sports**, the optimal $k$ leading to the optimal context for this arm may focus on smaller neighborhoods to reflect niche interests for Sports articles. \\n - For **Politics**, the optimal $k$ leading to the optimal context for this arm may expand to include a broader historical dataset, capturing more generalized trends for this category. \\n - For **Entertainment**, the optimal $k$ leading to the optimal context for this arm might fall somewhere in between, reflecting moderate local patterns. \\n\\nPlease kindly refer to the next part for the continuation of the example.\"}", "{\"comment\": \"Dear Reviewer 7MDe,\\n\\nWe noticed that you have reduced your score, and we would appreciate it if you could share the technical reasons behind this change.\\n\\nSincerely,\\nAuthors\"}", "{\"comment\": [\"Thank you for your response.\", \"I maintain my reservation about the assumption on the reward structure. 
If the authors claim such a reward structure models real-world data well, they need to present an ample amount of empirical evaluation showing that. Since they specifically talk about non-linear and non-stationary reward functions, I would expect empirical evaluation against algorithms that cater to these scenarios. Note that I provided several references to non-stationary bandits that the authors ignored.
- Further, the experiments section needs to benchmark against neural bandit algorithms, non-stationary algorithms, and simple combinations of these, to provide convincing evidence that the proposed algorithm indeed solves a problem that current algorithms, or simple combinations of them, do not.
- Putting a giant table with superficial phrases is a bad way of comparing with existing literature (Table 1). Further, I do not understand what the purpose of Table 2 is. I am guessing it is to tell us that "LNUCB-TA" uses an "Attention Mechanism". Without empirical evidence that such modelling is required on real-world data, I do not see why it is of any importance.
- P.S. Authors can update the paper draft. It is more advisable to make modifications to the draft, and summarize responses with pointers to specific parts in the draft, rather than dumping a huge pile of text here.

---

- I appreciate the authors running one of the necessary baselines.
I would request them to upload their code for this specific comparison, along with reproducibility information, as a supplementary file to help me further evaluate the experiments.
- Regarding "**It is worth noting that NeuralUCB, as demonstrated in its original paper, outperformed five other neural network-based methods.**": Neural Thompson Sampling [6] and Neural SquareCB/FastCB [7] have been shown to outperform NeuralUCB on a number of tasks, and therefore are still missing from the baselines.
- Regarding "**Finally, we would like to note that the bandit literature encompasses a vast number of models. While reviewers may suggest adding comparisons with additional models, it is neither practical nor standard for every paper to benchmark against all existing methods. Instead, comparisons are typically focused on models most relevant to the context of the presented work. In our case, while comparisons with neural-based models are informative, they are not the primary focus, as our approach fundamentally differs from neural-based methodologies.**": In this reviewer's opinion, they are absolutely necessary baselines, since the authors have talked about non-linear variation in the reward functions. Further, since the authors say they "adapt to nonstationary environments", non-stationary bandit algorithms are also relevant (which the authors keep ignoring).
- Again, my primary objection is the assumption on the reward structure. One needs to motivate, with several datasets and existing baselines, why the community should care about such an assumption.
- Regarding the tables: in this reviewer's subjective opinion, Table 1 is exceedingly terse to infer much from, and Table 2 is unnecessary, as one cannot infer why and whether anyone needs "Linear Modeling" or an "Attention Mechanism".

---

Summary: This paper studies a type of contextual bandit where the reward model is a mixture of linear and non-linear components.
The non-linearity is captured by a k-NN component, where k-NN denotes k-Nearest Neighbors. More formally, upon sampling an arm $a$ the observed reward is $\hat{Y}_t^a = o_t^a(x_t, z_t) + \xi_t^a$, where $o_t^a(x_t, z_t) = \mu_t^a \cdot x_t^a + \mathrm{kNN}_{k,t}^a(x_t^a, z_t^a)$ and $z_t$ can be thought of as a nearest neighbor to $x_t$ which yields a similar reward in expectation to $x_t$. A motivating example that the paper gives is the traditional online recommendation system, where the above linear component captures broad trends, such as higher click-through rates, while the adaptive k-NN component refines this by recognizing local patterns. They claim that previous contextual bandit approaches cannot handle the mixture model (with the adaptive k-NN component), and so they propose the LNUCB-TA algorithm. The algorithm is mainly a LinUCB/UCB-type algorithm where the confidence interval $UCB_t^a = \alpha_{N_t^a} \cdot \sqrt{(x_t^a)^{\top} (\Sigma_t^a)^{-1} x_t^a}$ has an attention component $\alpha_{N_t^a}$. The $\alpha_{N_t^a}$ basically interpolates between the global reward average $g$ and the number of times each arm is sampled (I will discuss this again). They provide a regret bound in Theorem 1, and show that the LNUCB-TA regret scales as $R_T = \tilde{\mathcal{O}}(\sqrt{dT})$. Finally, they empirically validate their result against several benchmarks on real datasets.

Soundness: 2
Presentation: 2
Contribution: 2

Strengths:
1. This paper studies a new type of contextual bandit with mixture models. To my knowledge, this is somewhat novel.
2. The proposed UCB-type algorithm seems a valid approach. However, I have some concerns with the constants involved (to be discussed in the questions).
3.
They theoretically analyze the algorithm and show it achieves sub-linear regret.

Weaknesses:
1. The paper needs to improve its writing significantly. The assumptions are mentioned in the Appendix; please state them in the main paper, with a discussion of why they are important. The definition of BALL is mentioned in the appendix, and I could not find the definition of $\beta_t$; please point to it. These things will make the paper more readable in the next version.
2. The motivation is not clear to me. This requires a more detailed explanation. See my question 1.
3. The attention term of the UCB requires more explanation. It is not clear to me how tight it is, or how it reflects on the regret bound. See my question 2.
4. There are many questions on the theoretical proof of Theorem 1 as well as the technical novelty. I have some doubts regarding the approach. See my question 3.
5. How some of the linear contextual baselines are implemented is not clear from the draft.

Questions:
1. Why is an adaptive k-NN component needed for linear contextual settings? The paper starts with the statement (lines 75-76) "Despite advancements in MAB algorithms, existing algorithms predominantly fail to incorporate adaptive strategies for reward estimation as a function of the context." I am not sure I fully follow this. There are papers on Kernel-UCB which use reward estimation as a function of the context ("Finite-Time Analysis of Kernelised Contextual Bandits", Valko et al., 2013), as well as NeuralUCB papers and Collaborative NeuralUCB papers that extend this idea to non-linear contextual settings. What is k-NN specifically bringing to the table when such models exist?
2. The attention is basically defined as $\alpha_{N_t^a} = \frac{\alpha_0}{N_t^a + 1} \cdot \left(\kappa g + (1-\kappa) n_t^a\right)$.
Consider $\\\\kappa = 1/2$, $\\\\alpha\\\\_0 = 2$, assume the average of global rewards $g \\\\approx n_t^a$, then $\\\\alpha_{N_t^a} \\\\propto 1/\\\\left(N_t^a+1\\\\right)$. This is a very fast decay of UCB. So it is not clear to me how this is actually helping the exploration. Also not clear to me how you set $\\\\kappa$.\\n3. In Theorem 1, it is not clear to me how the $\\\\alpha\\\\_{N\\\\_t^a}$ is showing up in the proof. I suggest you write a more in-depth proof overview as the current proof overview from lines 376-386 is insufficient. How does the $\\\\kappa$ not appear in the proof? In your Corollary 1, you state that $\\\\Sigma\\\\_t^a=$ $\\\\left(X\\\\_t^a\\\\right)^T X\\\\_t^a+\\\\lambda I$ is the covariance matrix (Lattimore \\\\& Szepesv\\u00e1ri, 2020, equation 20.1) updated for arm $a$. However, in the book, the co-variance matrix is defined over all arms selected till time $t$. This is crucial as the co-variance matrix captures the information gathered from all arms, and is crucial to drive the informative sampling.\\n4. Assumption 3 in Appendix lines 873 seems like a very strong assumption. It states for all time steps t and for each arm, a, true parameter vector $\\\\mu^*$ resides within a confidence ball centered around the estimated parameter $\\\\mu_t^a$. This confidence ball is denoted as $\\\\mathrm{BALL}_{(t, a)}$. This should be rigorously proved instead of taking an assumption. Can you clarify this?\\n5. It would be also great if the authors explain in detail how the benchmarks LinUCB, LinTS are implemented for this setting. 
I also feel that more baselines like Kernel-UCB or NeuralUCB should be used to show that they fail in this setting.

Flag for ethics review: No ethics review needed.
Rating: 3
Confidence: 3
Code of conduct: Yes

---

**Response to Reviewer AgPg (Part 2/4)**

**Continuation of Q2 (Difference with k-NN UCB):**

In terms of computational efficiency, see **Table 1** (as shown in the column on the k selection method, the proposed k-NN is adaptive and k is selected **non-parametrically based on the variance of rewards**, whereas k-NN UCB is based on function optimization), **Table 2** (a difference in execution **run times** is shown in its column), and Figure 8 (the comparison of our model against the simple combination model, k-NN UCB + LinUCB, regarding **runtime and scalability**). Also, as stated in lines 320 to 323, unlike existing nonlinear approaches that involve a search over the preceding time steps $k \in [1, t-1]$ (Park et al., 2014; Reeve et al., 2018), our proposed model utilizes a data-driven approach for selecting k, significantly decreasing time complexity compared to the **function optimization** techniques used in k-NN UCB and k-NN KL-UCB.

For **adaptability** in dynamic environments, we have implemented k-NN adjustments based on reward history, which enables our model to respond flexibly to changing conditions over time (lines 312-320). In this section, specific scenarios with high and low rewards have been discussed. Additionally, our attention-based exploration parameter, described in **Section 3.3**, dynamically balances exploration and exploitation in real time (lines 25-28, Table 1, lines 85-97, lines 191-201, etc.).
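To make these two adaptive components concrete, here is a minimal Python sketch. The function names are illustrative, not from the paper's code; the attention rate follows the formula quoted in this thread, $\alpha_{N_t^a} = \frac{\alpha_0}{N_t^a+1}\left(\kappa g + (1-\kappa) n_t^a\right)$, where we read $g$ as the global mean reward and $n_t^a$ as the arm's mean reward (an assumption), and the exact variance-to-$k$ mapping below is purely an illustrative stand-in for the variance-based selection described above.

```python
import numpy as np

def attention_alpha(alpha0, kappa, pull_count, arm_rewards, all_rewards):
    # alpha_{N_t^a} = alpha0 / (N_t^a + 1) * (kappa * g + (1 - kappa) * n_t^a)
    # g: global mean reward; n_t^a: this arm's mean reward (assumed readings).
    g = np.mean(all_rewards) if len(all_rewards) else 0.0
    n_a = np.mean(arm_rewards) if len(arm_rewards) else 0.0
    return alpha0 / (pull_count + 1) * (kappa * g + (1 - kappa) * n_a)

def variance_based_k(arm_rewards, k_min=1, k_max=20):
    # Illustrative non-parametric choice of k from the variance of the arm's
    # rewards: higher variance -> smaller, more local neighborhood.
    if len(arm_rewards) < 2:
        return k_min
    k = int(round(k_max / (1.0 + np.var(arm_rewards))))
    return max(k_min, min(k, k_max, len(arm_rewards)))
```

The key design point is that both quantities are recomputed from observed rewards at each step, so neither the exploration rate nor the neighborhood size needs to be pre-tuned.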
As a result of this continuous adjustment based on recent arm performance, our model becomes **more adaptable** to evolving reward structures, which addresses the limitations of static and fixed-rate approaches (lines 326-331). Furthermore, you can see in **Figures 3 and 4** the comparison between k-NN UCB and our model at different exploration rates; thanks to the attention-based exploration rate proposed in our model, we are able to provide **more consistent results**, showing that our model is more responsive to changes in reward, while also providing a better estimation with higher mean and cumulative reward.

---

Thank you for your continued feedback and for raising these points.

**Benchmark and reward structure**

We would like to highlight that we have already presented comparisons **against 14 different models on three datasets**, demonstrating the robustness of our model across diverse scenarios. Moreover, **an additional dataset** was incorporated based on Reviewer 7MDe's comments, further extending our evaluations.

However, as requested, we have now provided results for **KernelUCB and NeuralUCB**. **It is worth noting that NeuralUCB, as demonstrated in its original paper, outperformed five other neural network-based methods.
Notably, our model outperforms NeuralUCB in the two datasets analyzed, further showcasing its effectiveness and adaptability.**

### Performance Comparison Table on the News Recommendation Dataset

| **Model** | **Exploration Rate (α)** | **Cumulative Reward** | **Mean Reward** |
|-----------|--------------------------|-----------------------|-----------------|
| **LNUCB-TA** | 0.01 | **752** | **0.94** |
| **LNUCB-TA** | 0.1 | **741** | **0.93** |
| **LNUCB-TA** | 1 | **752** | **0.94** |
| Neural UCB | 0.01 | 726 | 0.90 |
| Neural UCB | 0.1 | 717 | 0.89 |
| Neural UCB | 1 | 722 | 0.90 |
| Kernel UCB | 0.01 | 479 | 0.60 |
| Kernel UCB | 0.1 | 414 | 0.52 |
| Kernel UCB | 1 | 446 | 0.56 |

Based on the table above, our proposed model outperforms the two other models. As Neural UCB performs well on this dataset, we extended the analysis to compare it with our model on the AstroPh co-authorship network in terms of cumulative reward (y-axis) observed at 5% (Figure 7). Neural UCB achieves a cumulative reward of 8636 observed nodes, which is **approximately 12% lower** than the **9808** achieved by our model (LNUCB-TA). Kernel UCB, on the same dataset, achieves 9332 observed nodes, which is **4.85% lower than our model**. Moreover, even our proposed component combined with Epsilon Greedy (green line in Figure 7) surpasses Neural UCB with 8917 observed nodes, **approximately 3.5% higher** than Neural UCB. We will provide the updated Figure 7 in our revised manuscript by adding these two models to the figure.

Interestingly, in the news recommendation dataset, Neural UCB outperformed Kernel UCB, whereas in the AstroPh co-authorship network, Kernel UCB showed better performance than Neural UCB. Despite these variations, **our model outperformed both methods across both datasets**.

Finally, we would like to note that the bandit literature encompasses a vast number of models.
While reviewers may suggest adding comparisons with additional models, it is neither practical nor standard for every paper to benchmark against all existing methods. Instead, comparisons are typically focused on the models most relevant to the context of the presented work. In our case, while comparisons with neural-based models are informative, they are not the primary focus, as our approach fundamentally differs from neural-based methodologies.

---

### **Tables 1 and 2**

Table 1 serves to systematically position our model relative to existing approaches by highlighting key methodological differences (the respected reviewer specifically asked us to **discuss this series of work** in W2-4).
Table 2, in particular, underscores our model's ability to incorporate both global trends and local refinements, while introducing attention for dynamic exploration. This unique synthesis is validated empirically through extensive results across multiple datasets (see Table 2, Figure 2, Figure 5, etc.), demonstrating the tangible benefits of combining these components.

We would like to respectfully note that the length of our responses was driven by your feedback. Specifically, the respected reviewer had requested **discussions on ten additional papers**. While the responses were detailed, they were structured and targeted to address the points raised. Striking the right balance between brevity and clarity is challenging in such cases, but our intention was always to provide clarity rather than overwhelm. It is, however, surprising that the reviewer focused solely on comparisons, seemingly overlooking our contributions, theoretical proofs, and other significant aspects of the work.

We appreciate the suggestion to revise the draft directly. However, as we can submit only one updated PDF, we plan to provide a revised draft after receiving feedback from all reviewers.
This ensures a unified and comprehensive update.

---

Summary: This work studies the problem of contextual bandits and develops a novel UCB-style algorithm leveraging a linear relationship between contextual features and reward for each arm, along with non-linear estimation via k-NN. The developed algorithm addresses some important limitations found in existing algorithms like LinUCB that rely on a linear relationship between the context and the reward formulation. By introducing an adaptive k-Nearest Neighbors component that adjusts with reward variance, LNUCB-TA captures the non-linearity without being computationally explosive. Along with a dynamic exploration term, LNUCB-TA seems to perform much better than available state-of-the-art algorithms in contextual MABs. The authors also provide theoretical regret guarantees which achieve sub-linear regret bounds.

Soundness: 3
Presentation: 3
Contribution: 3

Strengths:
1. This work addresses the issue of computational efficiency when dealing with non-linearity in the contextual features, which is an essential component for using these algorithms in real-world scenarios. Generally, most existing approaches that model a non-linear relationship between contextual features and reward, including KNN-UCB, are computationally expensive, and LNUCB-TA tends to solve the time-complexity issues commonly associated with nonlinear models.

2. The work presents a solid theoretical guarantee in the form of regret, matching the theoretical performance of existing algorithms with its sub-linear regret bound.
Thus, showcasing its solid performance.

3. The work also presents strong empirical results in various regimes with multiple datasets to showcase its performance in real-world scenarios.

4. The authors also provide details regarding the criteria for k-selection, which is intuitively based on the variance in rewards for each arm at time $t$, thereby closing the gaps previously found in KNN-UCB and other algorithms.

Weaknesses:
1. Though the problem setting is interesting, the paper studies the contextual MAB problem; it is an incremental work extending the basis of LinUCB and KNN-UCB with the inclusion of adaptive KNN as the non-linear factor in the reward estimation.

2. The work doesn't include any results on the sample complexity of this algorithm to understand the sampling regime.

3. The work also doesn't include any detailed regret comparison to other algorithms that exist in the contextual multi-armed bandit problem space.

Questions:
1. The concept of UCB itself involves considering the unit reward (total reward / number of selections). What additional information does temporal attention bring to the estimated quantity in decision making, and how do they differ? This can help in understanding the temporal attention term better.

2. The work compares its empirical performance to LinUCB, LinUCB with KNN, etc.; however, a detailed comparison of LNUCB-TA's theoretical regret bound guarantees with the aforementioned algorithms is not discussed. Having those details can help in better understanding the theoretical performance guarantees.

3. The empirical evaluation focuses mainly on recommendation systems and network exploration. How would the model perform in other domains with different reward structures, such as healthcare or finance?

4. Also, the algorithm seems to rely heavily on the fixed weight between linear and nonlinear components in the measured quantity.
Does this limit the performance of the algorithm in specific regimes? If so, a detailed discussion would help bring more clarity. Also, would it be possible to make the weight some form of tunable parameter?

5. k-NN relies on the dimensionality of the context features. What happens when the contextual features are large enough that the method suffers from a high-dimensional space? There seem to be no sufficient details on the potential limitations in high-dimensional feature spaces. Could you include more details about this?

6. $f_a$ uses the observed rewards from $z_t^a$ for the closest neighbors $a_{k,t}$ in terms of context similarity based on Euclidean distance. Why was the Euclidean distance metric chosen?

7. Also, how does it react when there are many irrelevant features in a high-dimensional context, as it uses a Euclidean ball?

Flag for ethics review: No ethics review needed.
Rating: 6
Confidence: 4
Code of conduct: Yes

---

Dear Reviewer 1LHD,

We would like to express our sincere gratitude for your valuable insights and suggestions on our work. We have tried our best to address the concerns and queries you raised during the rebuttal process. However, we would greatly appreciate knowing whether our response has effectively resolved your doubts. Your feedback will be instrumental in improving the quality of our work. As the end of the discussion period is approaching, we eagerly await your reply.

Sincerely,

Authors

---

**Response to Reviewer AgPg (Part 1/4)**

We sincerely thank the reviewer for their valuable feedback and insightful comments. Please kindly find our responses addressing the weaknesses (W) and questions (Q).

**W1 & Q1 (Reward Structure):**

We appreciate your insightful comment regarding incorporating k-NN into the expected reward.
We recognize that this may seem unconventional compared to traditional contextual multi-armed bandit (MAB) models. For clarification, we have incorporated the **history of rewards** into the expected reward function. A nonlinear adjustment based on the average reward of a set of nearest neighbors is added to the linear component of the true expected reward. As a result of this structure, **global trends** (through the linear term) and **local behavior** can both be modeled. Our assumption enables us to capture reward dynamics adaptively in environments where reward structures vary across contexts, particularly nonstationary environments.

It is suggested that this formulation be considered a hybrid approach: a linear setting enhanced by **a refinement using an adaptive k-NN**, based on reward history. The primary linear component captures the global relationship between context and reward, whereas the k-NN adjustment provides a context-specific, localized correction. Based on historical reward data, this refinement enables the model to dynamically address nonlinear variations without fundamentally deviating from standard contextual bandit formulations. Taking this perspective into account, the k-NN inclusion is not unusual, but rather **an extension** of the model that enhances its adaptability and precision in complex, evolving environments.

We understand that Reeve et al. (2018) do not assume such a decomposition and instead rely on Lipschitz and marginal assumptions (Assumptions 1 and 3 in their paper) to guide the reward estimation process. Their approach focuses on ensuring that the reward function behaves smoothly with respect to the context. In addition, there is a certain separation between rewards for different arms, which supports efficient exploration and exploitation.
While their assumptions are effective for specific types of reward structures, our approach takes **a more flexible** route by incorporating historical rewards into the expected reward formulation, allowing for a richer model that can handle more complex reward structures.

When classical bandit algorithms are implemented, the expected reward is generally determined by the context at the current time step. Once the context has been provided, it is assumed that the reward will be independent of past rewards. Our approach, on the other hand, **modifies the expected reward based on past rewards**. In our model, this assumption is primarily introduced to incorporate the history of rewards into the reward estimation process for each arm so that local reward patterns can be better captured. An adaptive k-NN term is used in the model to consider the history of actions in similar contexts in order to capture the impact of past rewards, which is not directly addressed by classical models. As a result, the model is capable of adapting its reward estimates in light of historical information, thereby providing a more comprehensive and flexible picture of rewards.

The main intuition behind our approach is that reward dynamics are nonstationary in many real-world environments, which means that reward structures can evolve over time, and **past rewards can continue to impact future decisions**.
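As a concrete illustration of this hybrid estimate, here is a minimal, hypothetical Python sketch: the linear term plus a k-NN adjustment computed from the arm's own stored context/reward history, with neighbors selected by Euclidean distance as discussed elsewhere in this thread. Function names and the flat (unweighted) neighbor average are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def knn_adjustment(x, past_contexts, past_rewards, k):
    # Mean reward of the k historical contexts nearest to x (Euclidean).
    if len(past_contexts) == 0:
        return 0.0
    dists = np.linalg.norm(np.asarray(past_contexts) - np.asarray(x), axis=1)
    nearest = np.argsort(dists)[: min(k, len(dists))]
    return float(np.mean(np.asarray(past_rewards)[nearest]))

def hybrid_reward_estimate(mu_a, x, past_contexts, past_rewards, k):
    # Global linear trend plus the local k-NN refinement from reward history.
    return float(np.dot(mu_a, x)) + knn_adjustment(x, past_contexts, past_rewards, k)
```

The linear term alone would give the same estimate for every context on a given hyperplane; the k-NN term shifts that estimate up or down depending on how similar past contexts actually paid off.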
Accordingly, the k-NN term allows the model to capture the impacts of past rewards by considering the history of actions in similar contexts, something that is not considered directly in classical models.

The inclusion of k-NN directly in the expected reward of bandit models is less common, but we believe that this formulation significantly strengthens the model's ability to learn and **adapt to nonstationary environments** where previous actions and rewards have lasting effects.

**Q2 (Difference with k-NN UCB):**

We would like to kindly highlight that throughout the paper, particularly in **Sections 3.1, 3.2, and 3.3**, we have made an effort to provide a clear explanation of how our model relates to and differs from k-Nearest Neighbour UCB (Reeve et al., 2018). We propose a unique synthesis of linear and nonlinear components, which captures both global trends through a linear model and localized adjustments through a k-NN-based nonlinear model (lines 17-20). In combination, our model can handle both broad and local variations in reward structures more effectively than traditional k-NN based approaches, which might only provide localized adaptations (lines 80-84).

---

**Response to Reviewer 1LHD (Part 4/5)**

**Continuation of Q3 (Exploration Rate, Arm-Specific Scenario):**

This global matrix is appropriate when all arms share a single global parameter vector $\mu$, and the reward for any arm $a$ is modeled as:

$$
r_t^a = \langle x_t^a, \mu \rangle + \eta_t^a,
$$

where $\mu$ is the shared parameter vector.

### Our Setting: Hybrid Reward Model and Arm-Specific Covariance Matrices
In our hybrid contextual bandit model, each arm $a$ is associated with its own parameter vector $\mu^a$, and the observed reward for arm $a$ at time $t$ is modeled as:

$$\hat{Y}_t^a = \mu_t^a \cdot x_t^a + \text{k-NN}_{k,t}^a(x_t^a, z_t^a) + \xi_t^a$$

as detailed in
Equation (1) of the manuscript. Here:
- $\mu_t^a \cdot x_t^a$ is the linear reward component for arm $a$,
- $\text{k-NN}_{k,t}^a(x_t^a, z_t^a)$ is the non-linear adjustment using k-nearest neighbors,
- $\xi_t^a$ is the noise term.

This distinct parameterization necessitates per-arm covariance matrices (stated in Corollary 1), defined as:

$$
\Sigma_t^a = (X_t^a)^\top X_t^a + \lambda I,
$$

where:
- $X_t^a = [x_{a,1}, x_{a,2}, \ldots, x_{a,N_t^a}]^\top$ contains all context vectors observed for arm $a$ up to time $t$,
- $N_t^a$ is the number of times arm $a$ has been selected up to time $t$.

The per-arm structure ensures that exploration and uncertainty for each arm are driven solely by its own historical data, as outlined in Equation (3) of the manuscript.

Rather than relying on a global structure for exploration decisions, this formulation ensures that exploration decisions are driven by the most relevant context and reward history for each arm. This is one of the main aspects of our proposed model, followed in all the mathematical settings of the paper. Our hybrid model integrates global trends ($\mu_t^a \cdot x_t^a$) and local adjustments ($\text{k-NN}_{k,t}^a(x_t^a, z_t^a)$), necessitating a localized uncertainty measure.
The per-arm covariance matrix naturally aligns with this requirement, enabling efficient and adaptive exploration in evolving reward settings.

**Q5 (Benchmarks):**

As benchmarks, we selected LinUCB, LinThompson, k-NN UCB, and k-NN KL-UCB in order to highlight the limitations of models that rely exclusively on linear or nonlinear approaches, and to motivate our unique synthesis of linear and nonlinear components.

#### Why LinUCB and LinThompson?

- **Purpose:** These models capture global trends through linear modeling of context-reward relationships.
- **Relevance to Our Setting:** By incorporating LinUCB and LinThompson as baselines, we demonstrate how linear models have difficulty adapting to non-linear variations in reward structures. In particular, these benchmarks demonstrate the limitations of linear-only approaches in capturing local patterns and context-sensitive adjustments, which are critical in dynamic environments.

### Implementation in Our Setting:

- **LinUCB:**
  The reward expectation is correctly formulated as
  $$
  E[r_{t,a} \mid x_{t,a}] = \langle x_{t,a}, \mu_a^* \rangle,
  $$
  where $x_{t,a} \in \mathbb{R}^d$ is the context vector and $\mu_a^* \in \mathbb{R}^d$ is the true parameter vector.
  **Exploration Mechanism:** The exploration-exploitation trade-off is achieved using the classical upper confidence bound (UCB).

- **LinTS:**
  The reward expectation for LinTS is also correctly expressed as
  $$
  E[r_{t,a} \mid x_{t,a}] = \langle x_{t,a}, \mu_a^* \rangle.
  $$
  The posterior sampling formulation is consistent with the standard Thompson Sampling framework:
  $$
  \tilde{\mu}_{t,a} \sim \mathcal{N}(\hat{\mu}_{t,a}, \Sigma_{t,a}),
  $$
  where $\tilde{\mu}_{t,a}$ is a sampled parameter vector from a multivariate Gaussian distribution with mean $\hat{\mu}_{t,a}$ and covariance matrix $\Sigma_{t,a}$.

Both LinUCB and LinTS assume a purely linear relationship between context and reward, relying on the linear model $\langle x_{t,a}, \mu_a^* \rangle$ to predict rewards. While these methods perform well in environments where the reward structure follows global linear trends, they fail to capture local variations or nonlinear dependencies in the reward function.

In contrast, our proposed model builds upon these frameworks by integrating a novel k-NN-based refinement to capture local variations and non-linear dependencies in the reward structure, adapting to context-sensitive patterns. Additionally, it incorporates an attention mechanism that dynamically adjusts the exploration parameter in real time, effectively balancing exploration and exploitation based on reward history without the need for pre-tuned parameters.
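To illustrate the per-arm statistics discussed in Q3 above ($\Sigma_t^a = (X_t^a)^\top X_t^a + \lambda I$ with an arm-local uncertainty bonus), here is a minimal, hypothetical sketch; the class and method names are illustrative, and a fixed exploration rate `alpha` stands in for the attention-based rate for brevity.

```python
import numpy as np

class PerArmLinUCB:
    # Each arm keeps its own covariance matrix and reward-weighted context
    # sum, so its exploration bonus is driven only by that arm's history.
    def __init__(self, d, lam=1.0, alpha=1.0):
        self.Sigma = lam * np.eye(d)  # Sigma_t^a = X_a^T X_a + lam * I
        self.b = np.zeros(d)          # sum of reward * context for this arm
        self.alpha = alpha

    def ucb_score(self, x):
        x = np.asarray(x, dtype=float)
        Sigma_inv = np.linalg.inv(self.Sigma)
        mu_hat = Sigma_inv @ self.b                      # ridge estimate
        bonus = self.alpha * np.sqrt(x @ Sigma_inv @ x)  # arm-local uncertainty
        return float(mu_hat @ x + bonus)

    def update(self, x, reward):
        x = np.asarray(x, dtype=float)
        self.Sigma += np.outer(x, x)
        self.b += reward * x
```

A bandit over several arms would keep one such object per arm, score all arms on the current context, and play the arm with the highest score.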
This unique synthesis of a global linear component, a local non-linear adjustment, and adaptive exploration ensures the model overcomes the limitations of purely linear methods while maintaining computational efficiency and adaptability in dynamic environments.

Please see the next part for the rest of the response.

---

**Response to Reviewer 7MDe (Part 3/3)**

**Q5 (High-dimensional Context):**

We thank the reviewer for their insight into the impact of high-dimensional feature spaces on k-NN within the LNUCB-TA model. Indeed, high dimensionality can pose challenges, including reduced discriminative power in distance metrics and potential computational costs, especially in real-time applications. The current model focuses primarily on adaptability and performance in various contextual environments; however, we acknowledge that dimensionality can have a significant impact on the efficiency and accuracy of the k-NN component. In spite of the inherent limitations of k-NN in high-dimensional contexts, our approach incorporates both linear and nonlinear components to mitigate them. In our hybrid model, the linear component provides a complementary global estimation, which reduces sole dependence on the k-NN term for predicting rewards. By doing so, the model can still take advantage of linear trends in high-dimensional situations while adjusting locally to the greatest extent possible with the k-NN term.

We envision multiple enhancements to LNUCB-TA in the future that would maintain or enhance its performance in high-dimensional scenarios. These enhancements include:
1. **Dimensionality Reduction Techniques:** Principal Component Analysis can be used to reduce the dimensionality of the contextual feature space, which will enhance the k-NN component's ability to capture local patterns without compromising computational efficiency.
2.
**Feature Selection and Metric Learning:** It is possible for LNUCB-TA to prioritize the most relevant features or learn the optimal distance metric for k-NN by integrating feature selection or metric learning techniques. By employing these approaches, k-NN would be able to select neighborhoods in a meaningful manner, even in high-dimensional environments. \\n3. **Adaptive Dimensionality Based on Context:** As another promising approach, we can adjust the dimensionality of the k-NN model dynamically based on the variance or contextual structure, ensuring that the model is not overly sensitive to dimensions that are not critical for predicting rewards. \\n\\nWe evaluated the performance of LNUCB-TA on real-world datasets in the current study, demonstrating that these high-dimensional limitations had no negative impact on performance. Our analysis indicates, however, that these enhancements may further improve the model's applicability in high-dimensional settings, and we consider this an exciting direction for future research.\\n\\n**Q6 & 7 (Euclidean Distance):**\\n\\n### 1. Context Similarity\\n When features are scaled appropriately, Euclidean distance can capture the similarity between contexts, making it an effective way of identifying relevant neighbors.\\n\\n### 2. Computational Efficiency \\nEuclidean distance is a computationally simple method of determining the distance between context features. Calculating the distance involves only basic arithmetic operations (squared differences and summation) for each pair. In cases where context vectors are relatively low to moderately dimensional, Euclidean distance is efficient due to the low computational complexity.\\n\\n### 3. Local Pattern Sensitivity and Distance-Based Weighting\\nIt provides us with a means of defining a neighborhood of k-nearest contexts to ${X}_t$, and then calculating the adjustment term from the observed rewards from these neighbors. 
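The neighborhood computation just described can be sketched as follows; ranking by squared Euclidean distance needs only the squared differences and summations mentioned above, since dropping the square root does not change the ordering (function name and the fixed `k` argument are illustrative):

```python
def knn_adjustment(x_t, contexts, rewards, k):
    """Average reward over the k past contexts nearest to x_t.

    Squared Euclidean distance gives the same neighbor ranking as the
    true distance, so only squared differences and sums are computed.
    """
    sq_dist = [sum((a - b) ** 2 for a, b in zip(c, x_t)) for c in contexts]
    order = sorted(range(len(contexts)), key=sq_dist.__getitem__)[:k]
    return sum(rewards[i] for i in order) / len(order)
```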
By averaging over nearby rewards, Euclidean distance ensures that contexts are included with minimal squared differences from ${X}_t$, capturing local reward patterns effectively.\\n\\n### 4. Potential Extensions\\nIf the feature space exhibits directional or correlation-based dependencies, we could consider alternative metrics such as the Mahalanobis distance. However, it is computationally more expensive (due to the inverse covariance calculation) and may not be as efficient for real-time applications.\\n\\nWe selected the Euclidean distance metric here for its simplicity, computational efficiency, and effectiveness in capturing local similarities when features are homogeneously scaled. As our k-NN component is designed to capture localized reward patterns based on Euclidean proximity, it is suitable for dynamic environments in which real-time computation and adaptability are essential.\\n\\n### Impact of High-Dimensional Settings\\nAs our model captures global trends through the linear component, our k-NN adjustment is more of a local refinement than a primary model for estimating rewards. So, the k-NN component only refines the overall reward estimate even in high-dimensional settings, where Euclidean distance might be affected by irrelevant features. The impact is less pronounced. This refinement is intended to capture context-specific variations that the global linear model may miss. Due to the additional localized information this component provides, its impact is limited if high-dimensional irrelevant features slightly alter neighbor selection.\"}", "{\"title\": \"Response to Reviewer j6kr (Part 1/4 )\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback and insightful comments. 
Please kindly find our response to address your Questions.\\n\\n**W1 (Reward Structure):**\\n\\nPlease kindly refer to our response to W1 & Q1 of Reviewer 1 (Reviewer AgPg).\\n\\n**W2 ($\\\\mu_t^a$):**\\n\\nIndeed, $\\\\mu_t^a$ represents the coefficients associated with the expected reward model for arm $a$ at time $t$. As new data (contexts and rewards) are observed, this vector is both unknown and time-varying. With the increasing collection of data, the model updates $\\\\mu_t^a$ to reflect changes in reward structure, thereby capturing changes in the relationship between context and reward over time. \\nContextual MAB algorithms are characterized by this time-varying characteristic. As more context-reward pairs are observed, the model is able to refine its prediction of the expected reward for each arm by continually refining the value of $\\\\mu_t^a$. In line with standard MAB models, where new observations are made to refine the model's understanding of the underlying reward function, this approach is consistent with the model's iterative updating of the expected reward. \\nAs stated in lines 779-795, a ridge regression model is used to estimate the reward model, which forms the basis of the update process for $\\\\mu_t^a$. As a result of the solution to this regression problem, we get a valid estimate of the true $\\\\mu_t^a$. We have provided details on this process and the associated uncertainty region in Corollary 1 of the Appendix. As more data is accumulated, the model refines its estimate of $\\\\mu_t^a$, which represents the relationship between the context and the expected reward for arm $a$. The uncertainty region quantifies the confidence in this estimate, and as the number of observations $N_t^a$ increases, the size of this uncertainty region decreases. This reflects a growing certainty in the model's expected reward predictions, driven by the additional information provided by the accumulated data. 
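The ridge-regression update behind this estimate can be illustrated in a minimal 2-D form, with the matrix inverse written out explicitly (the regularizer $\lambda = 1$ and the dimensionality are assumptions for the sketch, not the paper's exact settings):

```python
def ridge_estimate(contexts, rewards, lam=1.0):
    """Ridge estimate mu_hat = (lam*I + sum x x^T)^{-1} (sum r x), 2-D.

    The design matrix A grows with every observation, so the confidence
    region around mu_hat (which scales with A^{-1}) shrinks as data accrue.
    """
    A = [[lam, 0.0], [0.0, lam]]  # lam * identity
    b = [0.0, 0.0]
    for x, r in zip(contexts, rewards):
        for i in range(2):
            b[i] += r * x[i]
            for j in range(2):
                A[i][j] += x[i] * x[j]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    inv = [[A[1][1] / det, -A[0][1] / det],
           [-A[1][0] / det, A[0][0] / det]]
    return [inv[i][0] * b[0] + inv[i][1] * b[1] for i in range(2)]
```

With a true coefficient vector of $[2, -1]$, the estimate converges toward it as observations accumulate, matching the shrinking-uncertainty behavior described above.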
\\nThis formulation captures the \\\"optimism in the face of uncertainty\\\" principle, central to our approach. As the model accumulates more data, the confidence in the estimate $\\\\hat{\\\\mu}_a^t$ improves, narrowing the uncertainty region and enhancing the decision-making process. Therefore, the time-varying nature of $\\\\mu_t^a$ reflects both the iterative learning process and the evolving confidence captured by the uncertainty region, which is an important component of exploration and exploitation. By updating the model over time, the model is able to adapt to evolving reward structures.\\n\\n**W3 (Exploration-exploitation Tradeoff ):**\\n\\nAs mentioned in Section 3.1 (Overall concept), the attention-based exploration-exploitation is one of the main novel proposed components. In LNUCB-TA, the formula in Eqn. (2) is a central component of the algorithmic design, as it dynamically adjusts the exploration parameter. This formulation is intended to provide real-time adjustments to the exploration-exploitation balance based on both the global performance of all arms and the local reward patterns of each arm (lines 22-28, 102-105, 124-129). \\nAs stated in lines 85-97, in traditional MAB models, the exploration-exploitation trade-off is often managed by a fixed or manually tuned parameter. However, our approach introduces an adaptive strategy that continuously adapts as a result of observed performance. Due to the dynamic adjustment mechanism, manual fine-tuning of exploration parameters is eliminated. Instead, the model automatically adapts the exploration-exploitation balance in real time, thus responding effectively to changes in reward patterns without extensive parameter tuning or prior knowledge of the optimal exploration rate. Consequently, a more efficient and adaptive learning process is ensured. Thus, exploration-exploitation trade-offs are integral to the LNUCB-TA algorithm's design and operation. 
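As a rough illustration of such a dynamically adjusted exploration rate (this is NOT the paper's exact Eqn. (2), which should be taken from the manuscript; the rule below is a hypothetical instantiation combining global and per-arm reward means):

```python
def adaptive_alpha(arm_rewards, all_rewards, alpha0=1.0):
    """Hypothetical attention-style exploration rate: it shrinks as the
    arm is pulled more often, and shrinks fastest when the arm's local
    mean agrees with the global mean over all arms, shifting the
    balance toward exploitation without manual tuning.
    """
    n = len(arm_rewards)
    if n == 0:
        return alpha0  # unexplored arm: keep full exploration
    local = sum(arm_rewards) / n
    global_mean = sum(all_rewards) / len(all_rewards)
    return alpha0 * (1.0 + abs(local - global_mean)) / (1 + n) ** 0.5
```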
\\nIn Section 3.3, we have provided a more detailed explanation of this dynamic adjustment, clarifying how the exploration-exploitation balance is handled and its role in the overall algorithmic framework.\\n\\n**W6 (Estimation of $\\\\mu_t^a$):**\\n\\nPlease kindly refer to Corollary 1 and Definition 1 in the Appendix.\\n\\n**W7 (Section 3.2):**\\n\\nIn LNUCB-TA, k is not a fixed parameter but is dynamically determined for each arm during each time step. Unlike traditional approaches where k may be predefined, static, or chosen based on function optimization, our method dynamically adjusts k at **each time step**, adapting it to **the specific arm** and current observations. This leads to **lower time complexity** compared to existing methods (lines 320-323). Also, in this section, specific scenarios with high and low rewards have been discussed (lines 312-319). As a result of this process, the model is able to adjust its estimations in real-time as local reward variations are incorporated, which constitutes a significant improvement over existing methods.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We have decided to withdraw our submission, but we feel it is important to address some critical points for transparency and to defend our work, given that all comments will be made public.\\n\\n---\\n\\n## Reviewer 1LHD\\n\\nWe respectfully note that Reviewer 1LHD's feedback demonstrates significant misunderstandings of several critical aspects of our submission:\\n\\n1. **Misrepresentation of Our Work as a Mixture Model** \\n Surprisingly, the reviewer misinterpreted our model as a mixture model, which fundamentally differs from the contextual bandit framework we presented. Despite detailed clarifications highlighting this distinction, the misunderstanding persisted.\\n\\n2. 
**Misunderstanding of \\u03b2 as a Standard Notation** \\n The reviewer claimed \\u03b2 was undefined, even though it was explicitly defined in Equation (8) in the main text and elaborated in Definition 1, Corollary 3, and Assumption 1 in the appendix. \\u03b2 represents the radius of a confidence ball, a standard concept in the contextual bandit literature, as used in LinUCB, KernelUCB, etc. This oversight demonstrates the reviewer\\u2019s unfamiliarity with foundational concepts in the field.\\n\\n3. **Misinterpretation of the Attention Mechanism as a Weakness** \\n The reviewer critiqued the fast decay of \\u03b1 in specific scenarios as a weakness, despite it being a key strength of our model. When local and global rewards align, this decay optimally shifts focus to exploitation, a desirable property for contextual bandits. Again, this shows the reviewer\\u2019s unfamiliarity with fundamental concepts in multi-armed bandits such as the exploitation-exploration tradeoff.\\n\\n4. **Overwhelmed by Long Responses** \\n The reviewer expressed feeling overwhelmed by the length of our responses. We feel such comments are unwarranted given that our clarifications were in response to the reviewer\\u2019s questions, some of which were on items already clearly stated in our submitted manuscript.\\n\\n---\\n\\n## Reviewer j6kr\\n\\nReviewer j6kr's repeated questions on concepts explicitly addressed in the manuscript, such as the dynamic adjustment of k in k-NN, were concerning. The reviewer expressed confusion about the setup, which appears to stem from either unfamiliarity with the contextual bandit domain or a lack of engagement with the paper, as these concepts were clearly detailed in the manuscript. This misalignment resulted in unnecessary repetition of explanations, unnecessarily lengthening our responses.\\n\\n---\\n\\n## Reviewer AgPg\\nReviewer AgPg suggested additional comparisons and an expanded literature review. 
Initially, we provided results for 3 datasets and 14 models. Following the reviewer\\u2019s feedback, we extended the analysis to **9 datasets and 19 models**. To our knowledge, we have provided one of the most comprehensive comparisons in the contextual bandit literature.\\n\\nAdditionally, Reviewer AgPg insisted on his/her personal and subjective opinion that Table 1 in the paper, which summarizes our contributions, is unnecessary and should be deleted because it does not include empirical results. This criticism is surprising, as the table is located in **the introduction section** and is intended to highlight contributions, not present empirical findings. Such tables are a standard practice in academic papers, as seen in **Table 1b in [1] and Table 1 in [2]**, both published at ICLR last year. Removing a contribution summary from the introduction simply because it lacks empirical results contradicts established norms in the field.\\n\\nAdditionally, Reviewer AgPg dismissed the importance of our theoretical guarantees and regret bound proofs, which is surprising given their foundational role in contextual bandit research. This remark contradicts the value placed on theoretical analysis in the community.\\n\\n\\n---\\n\\n## Reviewer 7MDe\\n\\nWe are disappointed that Reviewer 7MDe reduced their score from 8 to 6 without providing any explanation or engaging in further discussion. We are deeply disappointed about this lack of transparency and unprofessionalism. We feel that this does not align with the principles of constructive peer review.\\n\\n\\n---\\n\\n\\nWhile we deeply respect ICLR\\u2019s commitment to scientific rigor, this experience has highlighted issues of fairness and consistency in the review process. 
Despite the detailed responses and extensive revisions that we have provided, several reviewers overlooked critical aspects of our work, failed to engage constructively, or exhibited unfamiliarity with the field.\\nWe hope these reflections will contribute to improving the review process in the future and ensure a fairer evaluation for all submissions.\\n\\n\\n\\n[1] Goktas, Denizalp, et al. \\\"Efficient Inverse Multiagent Learning.\\\" The Twelfth International Conference on Learning Representations (2024).\\n\\n[2] Seunghan Lee et al., \\\"Soft Contrastive Learning for Time Series.\\\" The Twelfth International Conference on Learning Representations (2024).\"}", "{\"comment\": [\"I would like to thank the authors for the response. I am with Reviewer 1LHD that a more clear response is desired. Still, I would like to see some further clarifications from the authors.\", \"**W1, W2, W7.** I understand the intention of the authors to have the k-NN in the reward model to capture the impact of previous rewards on later ones. Let me probably ask this question first before any further ones: is $k_t^a$ a parameter that is defined clearly in the problem formulation, e.g., as a pre-fixed sequence? Now I only see the selection of it in the later algorithm designs but not in the problem formulation (which should stand alone without the algorithm design).\", \"**W3.** If it is a part of the algorithm design, please remove it from the problem formulation.\", \"**W4.** I feel a bit lost in the response. 
Probably also let me clarify what is the difference between the environment-generated context $x_t$ and the optimal context for each arm $(x^{a}_t)^*$.\", \"It would be nice if the authors can first answer the above questions and then I can move to the other parts of the response, especially the regret.\"]}", "{\"summary\": \"The paper introduces LNUCB-TA, a contextual multi-armed bandit model that combines adaptive k-Nearest Neighbors with LinUCB along with adaptive exploration rate, provide sub linear regret guarantee and measure empirical performance against standard bandit algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper's presentation and clarity is good.\", \"To the best of my knowledge, the proofs in the main paper seem fine.\"], \"weaknesses\": \"1. One of my main concerns is the implicit reward structure in eq (1). The authors assume that the true expected reward is a sum of a linear and the average of the rewards of a set of nearest neighbors. It should be noted that the k-Nearest Neighbour UCB (Reeve et al., 2018) did not assume such a decomposition, but rather make a margin and lipschitz assumption (see Assumption 1 and 3 in (Reeve et al., 2018)). Why do the authors need to make such a restrictive assumption?\\n\\n2. In the *Existing Gaps and Intuition* section the authors say \\\"Linear models, constrained by static parameter updates, often fail in scenarios with inherently nonlinear relationships between contextual features and rewards.\\\" However the authors do not discuss a series of works in non-linear bandits (see [1],[2],[3],[4],[5],[6],[7], [8],[9],[10]) and the author's contribution with respect to these works.\\n\\n3. The authors talk about temporal or time-dependent changes in the reward, but do not again discuss the development in non-stationary bandits or discuss why those solutions are not effective in the current scenario (see eg. 
https://proceedings.mlr.press/v75/luo18a/luo18a.pdf, https://proceedings.mlr.press/v99/chen19b.html, https://arxiv.org/abs/2310.07786 and the references therein).\\n\\n4. Since the authors stress upon non-linearity of reward functions, there are missing benchmarks in the Experiments Section specifically with respect to NeuralUCB [5], Neural Thompson Sampling [6], Neural SquareCB/FastCB [7], Neural-Linear [3].\\n\\n[1] Carlos Riquelme, George Tucker, and Jasper Snoek. Deep bayesian bandits showdown: An empirical comparison of bayesian deep networks for thompson sampling. In International Conference on Learning Representations, 2018.\\n\\n[2] Xiuyuan Lu and Benjamin Van Roy. Ensemble sampling. In Proceedings of the 31st International\\nConference on Neural Information Processing Systems, NIPS\\u201917, pp. 3260\\u20133268, Red Hook, NY,\\nUSA, 2017. Curran Associates Inc. ISBN 9781510860964.\\n\\n[3] Tom Zahavy and Shie Mannor. Neural linear bandits: Overcoming catastrophic forgetting through\\nlikelihood matching, 2020. \\n\\n[4] M. Valko, N. Korda, R. Munos, I. Flaounas, and N. Cristianini. Finite-time analysis of kernelised contextual\\nbandits. arXiv preprint arXiv:1309.6869, 2013.\\n\\n[5] Dongruo Zhou, Lihong Li, and Quanquan Gu. Neural contextual bandits with ucb-based exploration.\\nIn International Conference on Machine Learning, pp. 11492\\u201311502. PMLR, 2020.\\n\\n[6] Weitong Zhang, Dongruo Zhou, Lihong Li, and Quanquan Gu. Neural thompson sampling. In\\nInternational Conference on Learning Representation (ICLR), 2021.\\n\\n[7] Rohan Deb, Yikun Ban, Shiliang Zuo, Jingrui He, and Arindam Banerjee. Contextual bandits with\\nonline neural regression. In The Twelfth International Conference on Learning Representations,\\n2024a.\\n\\n[8] D. Foster and A. Rakhlin. Beyond ucb: Optimal and efficient contextual bandits with regression oracles. In\\nInternational Conference on Machine Learning, pages 3199\\u20133210. PMLR, 2020.\\n\\n[9] D. Simchi-Levi and Y. Xu. 
Bypassing the monster: A faster and simpler optimal algorithm for contextual bandits under realizability. ArXiv, abs/2003.12699, 2020.\\n\\n[10] D. J. Foster and A. Krishnamurthy. Efficient first-order contextual bandits: Prediction, allocation, and triangular discrimination. Advances in Neural Information Processing Systems, 34, 2021.\", \"questions\": [\"Could the authors clarify why they make the specific restrictive assumption on the reward function in eq (1), specifically in the context of existing algorithms for non-linear reward functions?\", \"Could the authors specify in more detail how their solution differs from the k-Nearest Neighbour UCB (Reeve et al., 2018) along with a detailed explanation of the following sentence from Existing gaps and intuition: \\\"While nonlinear approaches like k-NN-based models (Reeve et al., 2018) offer flexibility, they often struggle with computational efficiency and adaptability in dynamic environments.\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer AgPg (Part 4/4)\", \"comment\": \"**Continue of W2-4**\\n\\nTable 2: Comparison of Contextual Bandit Algorithms Across Modeling, Attention, and Non-Linearity Features\\n\\n| **Algorithm** | **Linear Modeling** | **Local History Modeling** | **Attention Mechanism** | **Non-Linearity Handling** |\\n|---|---|---|---|---|\\n| **UCB** | No | No | No | None |\\n| **KL-UCB** | No | No | No | None |\\n| **k-NN UCB** | No | Yes | No | k-NN adjustment |\\n| **k-NN KL-UCB** | No | Yes | No | k-NN adjustment |\\n| **LinThompson** | Yes | No | No | None |\\n| **LinThompsonUCB** | Yes | No | No | None |\\n| **LinUCB** | Yes | No | No | None |\\n| **Deep Bayesian Bandits** | No | No | No | Deep neural networks |\\n| 
**Ensemble Sampling** | No | Yes | No | Ensemble diversity |\\n| **Neural Linear Bandits** | Yes | Yes | No | Neural network-based feature mapping |\\n| **Kernelized Contextual** | No | Yes | No | Kernel-based non-linearity |\\n| **NeuralUCB** | No | Yes | No | Neural feature mappings |\\n| **Neural Thompson Sampling** | No | Yes | No | Non-linearity via neural networks |\\n| **Online Neural Regression** | No | Yes | No | Neural network-based regression |\\n| **Optimal Contextual Bandit** | No | Yes | No | Regression oracle-based |\\n| **FALCON** | No | Yes | No | Regression oracle-based non-linearity |\\n| **FastCB** | No | Yes | No | Regression oracle-based |\\n| **LNUCB-TA** | Yes | Yes | Yes | k-NN for local non-linearity |\\n\\n\\nReferences presented cover a broad spectrum of approaches, ranging from Bayesian sampling and neural networks to kernel-based methods, all of which address contextual bandit challenges with distinct strategies for non-linearity and exploration versus exploitation. The Deep Bayesian Bandits Showdown and Neural Thompson Sampling algorithms utilize Bayesian posterior sampling to approximate exploration strategies but lack mechanisms for adapting these strategies in real-time. Ensemble Sampling and Neural Linear Bandits are models that utilize ensemble networks and neural feature mappings in order to address non-linear relationships, but these methods do not incorporate dynamic local adjustments in non-stationary environments or provide consistent adaptation. In Kernelized Contextual Bandits and NeuralUCB, nonlinearity is captured using kernel or neural embeddings, which effectively model complex reward functions. 
However, they have fixed parameters that limit their adaptability to evolving reward structures and do not have a mechanism for dynamically adjusting exploration rates.\\n\\nAs an alternative, we propose a LNUCB-TA model that combines a global linear model with a k-NN-based non-linear adjustment, providing a more versatile approach to handling dynamic and complex environments. Unlike any of the ten previously mentioned models, LNUCB-TA introduces a novel attention-based mechanism to continuously balance exploration and exploitation based on recent arm performance. Our model's dual structure for global and local patterns, coupled with our adaptive control, enables consistent performance across non-stationary scenarios without extensive tuning.\"}", "{\"comment\": \"Thank you for sharing your perspective on the tables in our response. We truly appreciate your engagement and your apology\\u2014please rest assured, no offense was taken.\\n\\nThe purpose of Table 1 in our initial response was to summarize the differences between our model and the ten additional papers you kindly asked us to discuss, focusing on various methodological aspects. Given the 5000-character limitation for responses, the table was designed to be concise and structured, providing an accessible overview while following the constraints. \\n\\nRegarding Table 2 in our initial response, we appreciate your feedback. This table is included in the introduction section of the paper (Table 1 in the paper) to outline our contributions , rather than in the results section, where empirical evidence is typically discussed. Its purpose is to provide a clear summary of our key innovations (summarize what our contributions are and how they differentiate our approach), aligning with common practices in academic papers. Similar examples include Table 1b in [5] and Table 1 in [6] from recent ICLR publications. 
We hope this explanation clarifies the intent and utility of this table in the context of our work.\\n\\nWe sincerely appreciate your acknowledgment of the theoretical bounds we provided. However, we are still a bit surprised that this level of analysis is considered insufficient. To provide additional context, the regret bounds of several key models are as follows:\\n\\n- **NPR**: $\\\\tilde{O}(\\\\tilde{d}\\\\sqrt{T})$ \\n- **NeuralUCB**: $\\\\tilde{O}(\\\\tilde{d}\\\\sqrt{T})$ \\n- **NeuralTS** and **Our Model**: $\\\\tilde{O}(\\\\sqrt{dT})$\\n\\nOur regret bound is comparable to these models, particularly given that **we achieve this without relying on neural networks**, which are known for their computational complexity, data demands, and sensitivity to hyperparameter tuning. Achieving such theoretical guarantee underscores the practicality and robustness of our approach for real-world applications.\\n\\nAdditionally, we have now provided comparisons with neural-based methods such as NeuralUCB and NeuralTS, addressing your primary concern. We hope these updates further demonstrate the relevance and effectiveness of our model in this context.\\n\\n\\nWe hope this now satisfies your concerns.\\n\\nSincerely\\n\\nThe Authors\\n\\n\\n---\\n\\n\\n**References**\\n\\n[1] Weitong Zhang, Dongruo Zhou, Lihong Li, and Quanquan Gu. Neural thompson sampling. In International Conference on Learning Representation (ICLR), 2021.\\n\\n[2] Kassraie, Parnian, and Andreas Krause. \\\"Neural contextual bandits without regret.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2022.\\n\\n[3] Dongruo Zhou, Lihong Li, and Quanquan Gu. Neural contextual bandits with ucb-based exploration. In International Conference on Machine Learning, pp. 11492\\u201311502. PMLR, 2020.\\n\\n[4] Jia, Yiling, et al. \\\"Learning neural contextual bandits through perturbed rewards.\\\" arXiv preprint arXiv:2201.09910 (2022).\\n\\n[5]Goktas, Denizalp, et al. 
\\\"Efficient Inverse Multiagent Learning.\\\" The Twelfth International Conference on Learning Representations (2024).\\n\\n[6] Seunghan Lee et al., \\\"Soft Contrastive Learning for Time Series.\\\" The Twelfth International Conference on Learning Representations (2024).\"}", "{\"comment\": \"Dear Reviewer j6kr,\\n\\nWe would like to express our sincere gratitude for your valuable insights and suggestions on our work. We have tried our best to address the concerns and queries you raised during the rebuttal process. However, we would greatly appreciate knowing whether our response has effectively resolved your doubts. Your feedback will be instrumental in improving the quality of our work. As the end of the discussion period is approaching, we eagerly await your reply before the end.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer 1LHD (Part 1/5)\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback and insightful comments. Please kindly find our response to address your Questions.\\n\\n**Q1 (Adaptive kNN):**\\n\\n### Limitations of Purely Linear Models in Contextual Bandits\\nIn a standard linear contextual bandit model, the expected reward given a context $X_t$ is modeled as:\\n\\n$$\\nE[Y_t \\\\mid X_t] = \\\\mu_t^\\\\top X_t\\n$$\", \"where\": [\"$f_\\\\theta(x_{(t,a)})$: The neural network output parameterized by weights $\\\\theta_{(t-1)}$, which models the nonlinear reward as a function of the context $x_{(t,a)}$.\", \"$g(x_{(t,a)}; \\\\theta_{(t-1)})$: The gradient of the neural network's output with respect to its weights, capturing the influence of the context on the learned function.\", \"$Z_{(t-1)}$: The regularized covariance matrix incorporating feature embeddings up to round $t-1$.\", \"In spite of the fact that Neural UCB is flexible because the neural network approximates complex, nonlinear reward functions, it has several limitations:\", \"**Fixed Network Structure:** During training, the neural 
network architecture (e.g., number of layers, neurons) is fixed, resulting in predefined complexity that cannot be adjusted to new patterns or reward structures as they arise.\\n- **Adaptivity Constraints:** Updating a neural network to reflect new reward patterns is computationally expensive and may result in slow adaptation, especially in non-stationary environments.\\n\\n### Our Approach: Dynamic k-NN Mechanism\\nWe distinguish our approach from kernelized and neural approaches by selecting relevant historical rewards dynamically, without reliance on fixed mappings or pre-trained network structures, through the adaptive k-NN component. Unlike Kernel-UCB or Neural-UCB, our approach allows the reward model to adjust its neighborhood size and content dynamically based on the reward variance. As the model incorporates this flexibility for **each arm** and **at every time step**, it is particularly suited to non-stationary environments without the computational overhead associated with neural networks.\\n\\nThis dynamic mechanism can be intuitively simplified as\\n\\n$$\\nE[Y_t \\\\mid X_t] \\\\approx \\\\mu_t^\\\\top X_t + \\\\frac{1}{k} \\\\sum_{i \\\\in N(X_t)} Y_i\\n$$\\n\\n- $\\\\mu_t^\\\\top X_t$: Captures broad patterns (global trend).\\n- $\\\\frac{1}{k} \\\\sum_{i \\\\in N(X_t)} Y_i$: Captures localized variations (local adjustment).\\n- The global trend term identifies broad, overall patterns in the reward structure.\\n- The k-NN term dynamically fine-tunes this by incorporating immediate, localized variations in the context space, enabling the model to make quick and context-sensitive adjustments.\\n\\nFor a more detailed comparison of our model with the literature, highlighting our contributions, please kindly refer to our response to Reviewer 1 (AgPg), Parts 3 and 4.\\n\\n**Q2 (Specific Scenario):**\\n\\nIn this scenario, the fast decay of $\\\\alpha_{N_t^a}$ is actually a strength, since it prioritizes exploitation while maintaining 
adaptability in dynamic situations.\\n\\n### Why Fast Decay is Beneficial under $g = n_t^a$\\n- **Reduced Need for Exploration:** \\n If $g = n_t^a$, the rewards for all arms are close to each other, indicating a limited degree of variability in reward outcomes. In this case, the algorithm does not have to explore aggressively, as each arm's reward pattern is closely aligned with the overall average reward pattern. Therefore, a fast decay under this assumption is advantageous, since it reduces exploration and allows the algorithm to concentrate on exploiting known rewards more efficiently.\"}", "{\"title\": \"Response to Reviewer 7MDe (Part 2/3)\", \"comment\": \"**Q3 (Other Domains):**\\n\\nWe appreciate your comment regarding the application of our model in other domains like healthcare or finance. To address this, we evaluated the performance of our model, LNUCB-TA, on the widely studied Warfarin dataset, which is a benchmark dataset used for modeling personalized medicine decisions. Specifically, this dataset focuses on predicting the correct warfarin dosage for patients based on their clinical and genetic profiles. Below, we summarize the setup, dataset details, and results:\\n\\n1. **Dataset and Problem Setup:**\\n - As part of the Warfarin dataset, clinical and genetic characteristics of patients are provided as well as the optimal dosage of warfarin required for each patient. We categorize the dosages into three clinically relevant categories based on the setup described in Bastani and Bayati's *\\\"Online Decision-Making with High-Dimensional Covariates\\\"*:\\n - Low dosage (< 3mg/day),\\n - Medium dosage (3-7mg/day),\\n - High dosage (> 7mg/day). \\n Each category is treated as an arm in a 3-armed contextual bandit problem. In this setup, we use a binary reward function where the algorithm receives a reward of 1 if the predicted dosage matches the true dosage and 0 otherwise.\\n\\n2. 
**Model Performance Evaluation:**\\n - To empirically validate LNUCB-TA's performance, we compared it against purely linear (LinUCB) and purely non-linear (k-NN UCB) models using different exploration rates.\", \"table_1\": \"Comparison of Models based on Approximate Cumulative Rewards for Different Exploration Rates on Warfarin Dataset\\n \\n| **Model** | **Exploration Rate** | ~**Cumulative Reward (t = 5528)** |\\n|--------------|-----------------------|-----------------------------------|\\n| **LNUCB-TA** | 0.01 | **3000** |\\n| **LNUCB-TA** | 0.1 | **2900** |\\n| **LNUCB-TA** | 1 | **2400** |\\n| KNN-UCB | 0.01 | 1750 |\\n| KNN-UCB | 0.1 | 1750 |\\n| KNN-UCB | 1 | 1500 |\\n| LinUCB | 0.01 | 2100 |\\n| LinUCB | 0.1 | 1500 |\\n| LinUCB | 1 | 1750 |\\n - **Key Insights from the Results:**\\n - As compared to LinUCB and k-NN UCB, LNUCB-TA consistently achieved higher cumulative rewards across all exploration rates. This demonstrates its ability to leverage both global (linear) trends and local (non-linear) patterns, which is crucial in healthcare settings where patient responses may vary significantly based on both broad population-level patterns and individual characteristics.\\n - In spite of LinUCB's ability to identify global patterns, its inability to account for local variations limits its applicability to such datasets. Although k-NN UCB incorporates non-linear components, it faces challenges in higher-dimensional feature spaces, resulting in lower performance compared to LNUCB-TA. \\n In light of the results, LNUCB-TA appears to be particularly suitable for personalized decision-making problems in healthcare, where both structured clinical data (global trends) and individual data (local variations) may be present. 
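The 3-armed dosage setup described above is easy to make concrete. A minimal sketch follows; the function names and the handling of the boundary values (3 and 7 mg/day) are our assumptions:

```python
def dosage_arm(daily_mg: float) -> int:
    """Map a warfarin dose (mg/day) to an arm:
    0 = low (< 3), 1 = medium (3-7), 2 = high (> 7)."""
    if daily_mg < 3:
        return 0
    return 1 if daily_mg <= 7 else 2

def binary_reward(chosen_arm: int, true_daily_mg: float) -> int:
    """Reward 1 iff the chosen arm matches the patient's true dosage bucket."""
    return int(chosen_arm == dosage_arm(true_daily_mg))
```

Each interaction then yields a 0/1 reward, and the cumulative rewards reported in the table are sums of these per-round rewards.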
Also, as a result of its attention mechanism, it is able to adapt to changing environments, which is a critical aspect of healthcare decision-making in a dynamically changing environment.\\n\\nThe new figure representing the exact values on the Warfarin dataset will be added to the revised version of the paper.\\n\\n\\n\\n**Q4 (Fixed Weights):**\\n\\nWe would like to highlight that this aspect has been acknowledged in our paper under the Limitation and Future Direction section. Despite the fact that the current fixed-weight approach simplifies the model and facilitates interpretation, we recognize that it may not fully capture the underlying data structure in certain domains and environments. Future studies may examine the possibility of: \\n\\n1. **Variable Weights:** Assigning different weights to the linear and non-linear components to better reflect the specific structure of the data. \\n2. **Dynamic Adjustment:** Adjusting these weights dynamically at the arm level or for specific time steps to respond to changing reward patterns based on attention mechanisms. \\n\\nWith these weights tunable, the model can be better adapted to environments where global trends and local patterns have varying relative importance. Although this level of tunability provides a greater degree of flexibility, it also poses challenges, such as increased computational complexity and additional hyperparameter optimization. We appreciate the reviewer's attention to this aspect, as it aligns well with the future directions proposed in our study. 
We view this as a promising direction for enhancing our model's adaptability and robustness.\"}", "{\"comment\": \"Thank you for your quick response.\\n\\nWe will update the supplementary materials to include the code for the newly added models, along with all necessary reproducibility information, as part of the revisions based on the ongoing feedback.\\n\\nWe have now provided results for 16 models across four datasets, which we believe is considerably more comprehensive than standard practice in the bandit literature. While Neural Thompson Sampling and Neural SquareCB/FastCB are important baselines in the context of neural-based methodologies, our model fundamentally diverges from such neural approaches in its design and focus. Moreover, when we discuss \\\"adapting to non-stationary environments,\\\" we refer specifically to handling non-stationary real-world datasets, not adopting a framework designed solely for theoretical non-stationary bandits. Our chosen datasets include several with inherently non-stationary characteristics, making them highly relevant for evaluating the adaptability of our model. Additionally, the reward structure in our model has been theoretically validated with a sublinear regret bound. Theoretical guarantees have been provided in the manuscript to substantiate the soundness of this structure. It is surprising that this aspect has been overlooked by the respected reviewer in terms of its **theoretical proof**, which **directly demonstrates why the community should care** about such an assumption, as it is comprehensively justified and foundational to the contributions presented in the paper.\\n\\nIn summary, **the main baselines relevant to our setup are LinUCB, k-NN UCB, k-NN KL UCB, the vanilla Lin+k-NN UCB combination, LinThompson, and KernelUCB, all of which have been extensively compared in our paper**. Additionally, our reward structure has been supported with theoretical proof for the sublinear regret bound.
This further establishes the validity of our approach. Even in the NeuralUCB and Neural Thompson Sampling papers, comparisons are limited to 6\\u20137 models most related to their setup, and not every possible baseline. We believe our approach aligns with standard practices in this regard while providing both empirical and theoretical justification for our contributions.\\n\\n\\nRespectfully, we note the inconsistency in the reviewer\\u2019s feedback. While earlier comments criticized the detailed responses as a \\\"**huge pile of text**,\\\" the current critique suggests that Table-1 is \\\"**exceedingly terse**.\\\" This contradiction highlights the challenge of striking the right balance between brevity and comprehensiveness in presenting comparisons. The descriptions provided in Part 4 of our answer further clarify their purpose. Regarding Table-2, **it is not meant** to argue \\\"**why**\\\" linear modeling or attention mechanisms are needed but rather to summarize **what** our contributions are and how they differentiate our approach (please refer to Table 1 of the paper). \\n\\nFor the reasons (infer why and if anyone needs), please refer to the section on \\\"Existing Gaps and Intuition\\\" (**lines 75\\u201397**) and for the motivating examples in real-world, please refer to the dedicated subsection (**lines 130-147**). For additional intuition on each contribution, dedicated sections are provided (**lines 244\\u2013251, 280\\u2013285, 326\\u2013331**). We respectfully request that the reviewer refer to these sections for a fuller understanding of our motivations and contributions.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Hi authors,\\n\\n1. I have gone over some parts of your response. I will take this into consideration while formulating the final rating during the discussion phase.\\n2. I just want to highlight that it is not a good rebuttal technique to give an overwhelming amount of response to a reviewer (or all reviewers). 
If the draft requires so much explanation and clarification it is a marker that the paper is not well written.\"}", "{\"comment\": \"Dear Reviewer AgPg,\\n\\nThank you for your time and comments; we have conducted the experiment you requested, as detailed below.\\n\\n---\", \"table_4\": \"Comparison of our model vs neural bandit models based on total regret (Mean \\u00b1 Standard Deviation and relative Std/Mean percentage over 20 runs)\\n| Dataset | Linear UCB | Linear TS | Kernel UCB | Kernel TS | BooststrapNN | eps-greedy | NeuralUCB | NeuralTS | LNUCB-TA (our proposed model) |\\n|-----------|------------------|------------------|------------------|------------------|------------------|-----------------|-----------------|-----------------|-------------------|\\n| Adult | 2097.5 \\u00b1 50.3 (2.40%) | 2154.7 \\u00b1 40.5 (1.88%) | 2080.1 \\u00b1 44.8 (2.15%) | 2111.5 \\u00b1 87.4 (4.14%) | 2097.3 \\u00b1 39.3 (1.87%) | 2328.5 \\u00b1 50.4 (2.16%) | 2061.8 \\u00b1 42.8 (2.08%) | 2092.5 \\u00b1 48.0 (2.29%) | **1673.1** \\u00b1 **12.07** **(0.72%)** |\\n| Magic | 2604.4 \\u00b1 34.6 (1.33%) | 2700.5 \\u00b1 46.7 (1.73%) | 2406.5 \\u00b1 79.4 (3.30%) | 2442.6 \\u00b1 64.5 (2.64%) | 2269.4 \\u00b1 27.9 **(1.23%)** | 2381.8 \\u00b1 37.3 (1.57%) | 2033.0 \\u00b1 48.6 (2.39%) | 2037.4 \\u00b1 61.3 (3.01%) | **1931.6** \\u00b1 **31.22** (1.62%) |\\n| MNIST | 2544.0 \\u00b1 235.4 (9.25%) | 2781.4 \\u00b1 338.3 (12.16%)| 3595.8 \\u00b1 580.1 (16.13%)| 3406.0 \\u00b1 411.7 (12.09%)| 1765.6 \\u00b1 321.1 (18.19%)| 1893.2 \\u00b1 93.7 (4.95%) | 2071.6 \\u00b1 922.2 (44.49%)| 1583.4 \\u00b1 198.5 (12.53%)| **1561.6** \\u00b1 **42.09** **(2.69%)** |\\n| Mushroom | 562.7 \\u00b1 23.1 **(4.11%)** | 643.3 \\u00b1 30.4 (4.72%) | 199.0 \\u00b1 41.0 (20.60%) | 291.2 \\u00b1 40.0 (13.74%) | 132.3 \\u00b1 8.6 (6.50%) | 323.2 \\u00b1 32.5 (10.06%) | 160.4 \\u00b1 95.3 (59.41%) | 115.0 \\u00b1 35.8 (31.13%) | **19.85** \\u00b1 **1.98** (9.97%) |\\n| Shuttle | 966.6 \\u00b1 **39.0** 
**(4.04%)** | 1020.9 \\u00b1 42.8 (4.19%) | **166.5** \\u00b1 39.4 (23.66%) | 283.3 \\u00b1 180.5 (63.72%)| 211.7 \\u00b1 20.9 (9.87%) | 682.0 \\u00b1 79.8 (11.70%) | 338.6 \\u00b1 386.4 (114.13%)| 232.0 \\u00b1 149.5 (64.45%)| 283.1 \\u00b1 55.44 (19.58%) |\\n\\nThis table compares the performance of total regret and standard deviation across datasets, including results from the NeuralTS paper and our proposed LNUCB-TA model. **The regret values for the baseline models are taken directly from the NeuralTS paper [1]** (Figure 1, Table 1, and Figure 3), where a detailed grid search for hyperparameter tuning was conducted (as described in Section A.1 [1]). Consequently, the proper hyperparameter tuning mentioned by the respected reviewer is inherent in the baselines, ensuring a fair comparison.\\n\\nThe respected reviewer has explicitly noted that *\\\"proper hyperparameter tuning (including the number of layers, width of the network, and step size)\\\"* is critical for the performance of neural-based bandit models. This observation aligns with the findings in the NeuralTS paper, which highlights significant variability in performance and high sensitivity to hyperparameter tuning, as evidenced in their Figure 2, Figure 4, and Table 2. This sensitivity underscores the inherent challenges of using neural models in contextual bandit frameworks. Furthermore, computational efficiency is another challenge for neural-based models, which mainly use diagonalized matrices [1-3]. For instance, Figure 5 in the NeuralTS paper demonstrates the high runtime required for exploration in neural bandits.\\n\\nPractical concerns about the computational cost in exploration are significant for neural bandits like NeuralUCB and NeuralTS. These models require the construction of high-probability confidence sets based on the dimensionality of network parameters and context vector representations, often involving matrices with hundreds of thousands of parameters.
As a result, approximations (e.g., only using diagonal covariance matrices) are employed to mitigate this computational burden [2, 3], but these approximations lack theoretical guarantees, creating gaps between theoretical and empirical performance [4].\\n\\nThe Neural bandit with perturbed reward (NPR) model [4] attempts to address computational efficiency in neural contextual bandits but highlights that online model updates in neural bandit models, relying on stochastic gradient descent **over entire training sets at each round**, remain a significant computational bottleneck. In contrast, our hybrid model design significantly mitigates this bottleneck by employing computationally efficient methods that avoid the iterative gradient descent updates required by neural network-based approaches.\\n\\nFurthermore, the NPR model\\u2019s regret performance is equal to or worse than NeuralUCB on datasets such as Adult, Mushroom, and Magic (Figures 2 and 6 in [4]). In comparison, **our proposed LNUCB-TA outperforms NeuralUCB across all evaluated datasets as shown in the table above** and demonstrates the lowest standard deviation in 4 out of 5 datasets, showcasing its robustness. Additionally, the results presented in Table 2, Figure 2 (part d), and Figure 6 (error bar plot) of our paper emphasize the consistency and reliability of our approach across varying parameter settings.\"}", "{\"comment\": \"Thank you for your thoughtful response.\\n\\nWe acknowledge your perspective that a clear separation between problem formulation and algorithm design is ideal for defining optimality and learning targets. However, in our specific case, the dynamic adjustment of $k$ in the $k$-NN mechanism is inherently tied to the problem's objective of adapting to changing reward structures in non-stationary environments.
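As a rough illustration of this hybrid, gradient-free design, the estimate $E[Y_t \mid X_t] \approx \mu_t^\top X_t + \frac{1}{k}\sum_{i \in N(X_t)} Y_i$ quoted earlier in this discussion can be sketched as follows; the variance-to-$k$ rule, least-squares fit, and Euclidean neighbor selection here are illustrative assumptions, not the paper's exact updates:

```python
import numpy as np

def dynamic_k(rewards, k_min=1, k_max=10, scale=1.0):
    """Toy rule: higher observed reward variance -> larger neighborhood,
    so noisier arms average over more history."""
    var = float(np.var(rewards)) if len(rewards) else 0.0
    return int(min(k_max, max(k_min, round(scale * var) + k_min)))

def hybrid_estimate(X_hist, y_hist, x_new):
    """Global linear trend (least squares on history) plus the mean reward
    of the k nearest historical contexts, with k set from reward variance."""
    X = np.asarray(X_hist, dtype=float)
    y = np.asarray(y_hist, dtype=float)
    x = np.asarray(x_new, dtype=float)
    mu, *_ = np.linalg.lstsq(X, y, rcond=None)        # global trend
    k = dynamic_k(y, k_max=len(y))
    nbrs = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return float(x @ mu + y[nbrs].mean())             # local adjustment
```

Each estimate costs one least-squares solve and one distance sort over the history, with no iterative gradient steps per round.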
This dynamic nature of $k$ is not merely an algorithmic choice but a fundamental component of how the problem itself is framed, as it directly impacts the reward estimation process.\\n\\nOur inclusion of the dynamic $k$-NN mechanism in the problem formulation is consistent with practices in bandit models for example NeuralUCB [1]. NeuralUCB defines the reward function $h(x)$ as part of the problem setting (equation 2.2), which inherently depends on the neural network's structure and properties (equation 2.3 in the same problem setting section). Similarly, our model incorporates the $k$-NN adjustment in the problem definition because it is essential for capturing local reward dependencies, just as NeuralUCB's $h(x)$ models non-linear reward functions.\\n\\nThat said, we will revise the manuscript to ensure that the problem formulation is presented more distinctly, highlighting the learning target and leaving implementation details to the algorithmic section. We appreciate your feedback as it will help us refine the clarity and structure of our paper.\\n\\nWe are glad that the clarification on the contexts was helpful and addressed your concerns in that regard.\\n\\n[1] Zhou, D., Li, L. and Gu, Q., 2020, November. Neural contextual bandits with ucb-based exploration. In International Conference on Machine Learning (pp. 11492-11502). PMLR.\"}", "{\"title\": \"Response to Reviewer 7MDe (Part 1/3 )\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback and insightful comments. 
Please kindly find our response to address your Questions (Q).\\n\\n**Q1 (Temporal Attention):**\\n\\nTemporal attention modifies the exploration term dynamically by introducing an adaptive scaling factor \\\\( \\\\alpha_{N_t^a} \\\\), which incorporates both global trends and local arm-specific behavior:\\n$\\\\alpha_{N_t^a} = \\\\frac{\\\\alpha_0}{N_t^a + 1} \\\\cdot \\\\left( \\\\kappa g + (1 - \\\\kappa) n_t^a \\\\right).$ This scaling directly influences the UCB confidence bound as mentioned in Equation (7) of the paper. As elaborated in Section 3.3, this adjustment integrates additional temporal and contextual information into the reward estimation process, as detailed below:\\n\\n### Global Reward Trends\\nGlobal attention summarizes the overall reward trends across all arms, reflecting system-wide behavior. The model adjusts exploration based on how an arm's rewards compare to the global average. This ensures that the confidence bound reflects not just the arm's own history but also its relative performance in the broader context.\\n\\n### Local Arm-Specific Performance\\nLocal attention captures the recent reward behavior specific to the chosen arm, focusing on its individual dynamics. This ensures that the confidence bound is sensitive to arm-specific changes, such as sudden improvements or deteriorations in reward patterns.\\n\\n### Temporal Adaptivity \\n\\nThe term ($\\\\frac{\\\\alpha_0}{N_t^a + 1}$) decays over time, ensuring that the exploration term diminishes as the arm is sampled more often, leading to more confident estimates. This decay is modulated by the GALA concept, allowing the model to dynamically adjust the rate of decay based on specific reward dynamics.\\n\\n### Context-Aware Confidence Bounds\\nUltimately, temporal attention enhances the estimated reward quantity by using Context-Aware Confidence Bounds, specifically incorporating the GALA concept. 
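For concreteness, the scaling factor defined above can be computed directly; a one-line sketch (the variable naming is ours):

```python
def adaptive_alpha(alpha0: float, N_t_a: int, g: float, n_t_a: float, kappa: float) -> float:
    """alpha_{N_t^a} = alpha0 / (N_t^a + 1) * (kappa * g + (1 - kappa) * n_t^a):
    exploration decays with the arm's pull count N_t^a, while kappa blends
    the global reward trend g with the arm's local recent reward n_t^a."""
    return alpha0 / (N_t_a + 1) * (kappa * g + (1 - kappa) * n_t_a)
```

With `kappa = 0.5`, an arm pulled nine times receives one tenth of the exploration scale of a fresh arm, all else equal.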
As a result of context-aware adjustment, reward estimates are more accurate, especially in dynamic and non-stationary environments.\\n\\nThe exploration parameter in standard UCB is typically static or only weakly adaptive, relying on simple decay or a fixed schedule. Temporal attention expands this concept by introducing dynamic exploration scaling, which adjusts exploration according to real-time reward trends at both global and local levels.\\n\\n**Q2 (Theoretical Regret):**\\n\\nThank you for your valuable feedback and the suggestion to provide a detailed theoretical comparison between LNUCB-TA and other algorithms. Below, we address your point by summarizing the regret guarantees for key baseline algorithms and highlighting the distinctions in their theoretical performance.\\n\\n## Theoretical Comparison\\n\\nThe regret bounds for LNUCB-TA and other contextual bandit algorithms are summarized in the table below. This table highlights how LNUCB-TA compares theoretically to algorithms like LinUCB, Kernel-UCB, and Neural-UCB:\\n\\n| **Algorithm** | **Regret Bound** | **Strengths** | **Weaknesses** |\\n|---------------|------------------------|--------------------------------------------------|--------------------------------------------------|\\n| **LNUCB-TA** | $\\\\( \\\\tilde{O}(\\\\sqrt{dT}) \\\\)$ | Adaptive to global and local patterns; real-time exploration adjustment | Requires k-NN tuning; computational cost higher than purely linear models |\\n| **LinUCB** | $\\\\( O(\\\\sqrt{dT \\\\log T}) \\\\)$ | Simple; effective for stationary, linear settings | Fails in non-linear or non-stationary environments |\\n| **Kernel-UCB**| $\\\\( O(\\\\sqrt{T}) \\\\)$ | Captures non-linearities with kernels | Fixed kernel limits adaptability; computationally expensive |\\n| **Neural-UCB**| $\\\\( \\\\tilde{O}(\\\\sqrt{T}) \\\\)$ | Models complex non-linear relationships | High computational cost; slower adaptation in non-stationary settings |\\n\\n## Key 
Takeaways\\n\\n1. The LNUCB-TA algorithm achieves a regret bound of \\\\( \\\\tilde{O}(\\\\sqrt{dT}) \\\\), which is comparable to other state-of-the-art algorithms such as LinUCB and Kernel-UCB. However, unlike Kernel-UCB or Neural-UCB, our model is inherently adaptive, making it suitable for non-stationary environments without retraining or kernel selection.\\n\\n2. Several baseline models, such as LinUCB and Neural-UCB, assume either stationarity or a fixed reward structure, which limits their effectiveness in dynamic environments. Through the use of its temporal attention mechanism and adaptive k-NN component, LNUCB-TA overcomes these limitations, enabling it to adapt to both local and global patterns as the environment changes.\\n\\n3. By incorporating both a global linear model and an adaptive k-NN component, LNUCB-TA achieves a dual focus of capturing global trends while adapting to local variations. Using this unique approach, the model remains robust in a variety of scenarios, from simple linear trends to highly non-linear ones.\"}" ] }
BwR8t91yqh
Interactive Speculative Planning: Enhance Agent Efficiency through Co-design of System and User Interface
[ "Wenyue Hua", "Mengting Wan", "JAGANNATH SHASHANK SUBRAMANYA SAI VADREVU", "Ryan Nadel", "Yongfeng Zhang", "Chi Wang" ]
Agents, as user-centric tools, are increasingly deployed for human task delegation, assisting with a broad spectrum of requests by generating thoughts, engaging with user proxies, and producing action plans. However, agents based on large language models often face substantial planning latency due to two primary factors: the efficiency limitations of the underlying LLMs due to their large size and high demand, and the structural complexity of the agents due to the extensive generation of intermediate steps to produce the final output. Given that inefficiency in service provision can undermine the value of automation for users, this paper presents a human-centered efficient agent planning method – Interactive Speculative Planning – aiming at enhancing the efficiency of agent planning through both system design and user interaction. Our approach advocates for the co-design of the agent system and user interface, underscoring the importance of an agent system that can fluidly manage user interactions and interruptions. By integrating human interruptions as a fundamental component of the system, we not only make it more user-centric but also expedite the entire process by leveraging human-in-the-loop interactions to provide accurate intermediate steps.
[ "large language model", "agent", "efficiency", "human-computer interaction" ]
Accept (Poster)
https://openreview.net/pdf?id=BwR8t91yqh
https://openreview.net/forum?id=BwR8t91yqh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zAKSCO4zpB", "xVb28AzwOx", "xIEbD1ADAl", "nBsyudXzSa", "hYa22oPSPP", "ddldHJBEn6", "XIlg0O4QfR", "X7DYwUdqlW", "TVIDUMdXDi", "T268RTg0ic", "RfUsfPt4HR", "HXCZBhT5y4", "GruMFkfDbG", "F7bYO0MQrb", "9ptrSKvrXU", "7e648Z9eaY", "6IZ7SdHSwz", "5v5YExOc78", "33Tu4b7I4V", "2duks0LvqI", "0bnqtLUIpM" ], "note_type": [ "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732019000017, 1730159825601, 1732019703526, 1734607888237, 1731919779394, 1733304077301, 1732019605472, 1732644401959, 1731918855139, 1732321700650, 1737523574017, 1732617553488, 1732663146009, 1732655698244, 1732321729561, 1730728495434, 1731918205275, 1732019967898, 1731921896236, 1732018962035, 1730839469529 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Reviewer_SCq5" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Area_Chair_5quc" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Reviewer_y5dF" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3408/Reviewer_SCq5" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Reviewer_y5dF" ], [ 
"ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Authors" ], [ "ICLR.cc/2025/Conference/Submission3408/Reviewer_SYAX" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal for Weakness 3\", \"comment\": \"> ***W3: In the experiments, the authors used k=4 -- why? Since k is central to the scheme, providing some guidance on selecting k would be beneficial. For example, could we employ dynamic tuning of k, etc?***\\n\\nThank you for raising this important point. **The choice of k=4 in our experiments is a heuristic decision aimed at balancing cost and acceleration speed.**\\n\\nBased on the accuracy breakdown analysis presented in the appendix, we observe that in the OpenAGI dataset, settings 1, 2, and 3 have an approximation agent with around 0.7 accuracy, while setting 4 has around 0.5 accuracy. **We selected k such that the probability of k sequential steps being correct is not too low.** For an accuracy of 0.7, the probability of three adjacent steps being correct is 34%, the probability of four adjacent steps being correct is 24%, and the probability of five adjacent steps being correct is 17%. Therefore, we subjectively chose k=4 to strike a balance. \\n\\n**Regarding the idea of dynamic k**, this is indeed a very interesting and promising approach. Some papers on speculative decoding, such as [1] have explored the possibility of a dynamic drafting step (k) based on the observation that the difficulty of predicting the next token varies across different contextual scenarios. They use a confidence threshold to stop the drafting model from further generation once the confidence score drops below it. 
A similar idea could be applied to speculative planning, where k could be dynamically adjusted based on the context and confidence levels.\\n\\nWe appreciate your feedback and will consider incorporating dynamic tuning of k in our future work to enhance the flexibility and efficiency of the framework in the future.\\n\\n[1] Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting\\n\\n**More detailed analysis on how to choose k can be found in https://openreview.net/forum?id=BwR8t91yqh&noteId=5v5YExOc78**\"}", "{\"summary\": \"The paper introduces a speculative planning framework to improve the time efficiency of agent-based task planning. The paper tackles reducing latency by implementing two agent types. An approximation agent generates initial, potentially incorrect steps, which the more accurate target agent validates. This setup allows speculative planning to proceed concurrently, ideally saving time without sacrificing accuracy. Through experiments with two agent planning benchmarks, OpenAGI and TravelPlanner validation and metrics such as time efficiency, API costs, etc, the paper demonstrates the efficiency gains. It also allows users to get involved in the planning process. However, this aspect is speculative and has not been tested.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) In terms of writing, the paper is well-organised and engaging. The planning algorithm is discussed well -- the speculative planning process is generally explained well. However, a more detailed discussion with a specific k-value around Fig 1 would be helpful in understanding the various scenarios and how these result due to the accuracy of A. For example, it will help the reader speculate how A's accuracy may impact the number of times replanning may be needed, etc.\\n\\n2) The paper evaluates various settings (e.g., ReAct, CoT, MAD, and DG), revealing a good level of depth in its potential impacts. 
The experiments are thorough, with 4 settings and 2 planning benchmarks (OpenAGI and TravelPlanner). Metrics such as time efficiency, API costs, and stepwise validation provide a good depth of analysis, both positive and negative aspects.\\n\\n3) In terms of novelty, while speculative planning may not be universally applicable, its potential to accelerate high-latency planning tasks is valuable for the AI community, especially for applications requiring real-time or near-real-time responses.\", \"weaknesses\": \"1) One downside, not fully addressed or discussed in the paper, is the increased cost. The paper largely supports its claims of time efficiency gains, presenting results indicating speculative planning can save time, especially in complex planning settings. However, cost savings are not fully supported, as all speculative settings incur higher costs. While this increase is expected, as A and T operate concurrently, the paper only highlights positives and does not discuss costs. To balance the discussion, the authors must discuss both sides: pros and cons in Sections 4.1 and 4.2. While time savings are the focus, I assume real-world deployments must balance cost and efficiency, particularly if A has very low accuracy and frequently diverges from the steps produced by T, which increases wasted resources. Therefore, an open discussion about this limitation must be expressed in the same units as efficiency is justified (%) and possibly what strategies may allow users to reduce the cost.\\n\\n2) Although the title suggests a \\\"co-design\\\" with user interface considerations, the paper provides limited insights into UI and user interactions. I find this aspect genuinely troublesome. The paper is technical. Even the UI elements, such as producing the steps in the correct order etc., are technical contributions. This is not co-design from an HCI perspective. 
I also have a few more questions on this aspect, which are provided in the Questions section later.\\n\\n3) The framework relies on hyperparameter k. In the experiments, the authors used k=4 -- why? The appendix does provide some insight into the sensitivity of this parameter. Still, since k is central to the scheme, providing some guidance on selecting k would be beneficial. For example, could we employ dynamic tuning of k, etc?\", \"minor_issues\": [\"048: and The sequential\", \"In Figure 1, please briefly state the reason for using Venmo as an example.\", \"Line 100: Statements like \\\"This strategy [[potentially]] reduces the time a target agent\\\" suggests either that this gain is not universal or that the authors have doubts about their results. If there are caveats regarding when we expect reduced time, then that needs to be made explicit.\", \"I did not find the Venmo case study or the diagrams helpful.\"], \"questions\": \"Since the cost is one of the main issues:\\n\\n1) Under what circumstances should each evaluation metric (e.g., accuracy vs. speed vs. cost) be prioritised, and how might these impact the framework's configuration, e.g., in terms of k? The cost increase is above 60% in some cases. For example, do the authors believe there is a case for solely focusing on cost and not efficiency and vice versa?\\n\\n2) Do the authors have insight into reducing costs to make the approach more appealing? 
The appendix states: \\\"...Implement a more sophisticated approximation-target judgment method...\\\" --- What does this look like?\\n\\nRegarding the ability of the users to intervene, etc.:\\n\\n3) How do you envision training end-users on the speculative planning interface, particularly in complex tasks where they may need to understand and manage concurrent agent processes?\\n\\n4) Also, given that user intervention is integral to your framework, how would you design the interface to facilitate timely and accurate user decisions?\", \"regarding_resource_constrained_environments\": \"5) What considerations should be explored for scaling the speculative planning framework in resource-constrained environments, especially where concurrent instances of T may lead to bottlenecks? \\n6) Since the approach's success largely depends on the accuracy of A, what do the results and analysis of k (in the appendix) indicate in terms of any minimum thresholds for A's accuracy?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply for Question 5 - 7\", \"comment\": \"> ***Q5: How do you envision training end-users on the speculative planning interface, particularly in complex tasks where they may need to understand and manage concurrent agent processes?***\\n\\nWe believe that **end-users won't require specialized training** to use the speculative planning interface. In our UI design, the underlying concurrent agent processes are abstracted away, presenting the planning as a standard multi-agent system involving two agents. From the user's perspective, it's a sequential interaction where an approximation agent suggests planning steps, and a target agent agrees or disagrees. Much like speculative decoding, users remain unaware of the backend mechanics. 
Our speculative planning approach is seamlessly integrated into the UI (**see page 6**), ensuring users can focus on their objectives without needing to understand or manage the concurrent processes behind the scenes.\\n\\n> ***Q6: Also, given that user intervention is integral to your framework, how would you design the interface to facilitate timely and accurate user decisions?***\\n\\n**On page 6, we discuss when and which agent\\u2019s result to present to the users on the user interface.** Specifically, the user interface only displays the approximation agent\\u2019s response based on a confirmed action trajectory. For instance, if the approximation agent quickly generates two sequential actions before the target agent confirms the first one, only the first action generated by the approximation agent will be shown. The system will then wait for the target agent's result on the first step before presenting the next message, which is the target agent\\u2019s result on the first step. If the target agent\\u2019s result aligns with the approximation agent\\u2019s result, the second step computed by the approximation agent will be presented, as it is based on a correct action trajectory.\\n\\n**Consequently, the user will see pairs of (approximation agent\\u2019s generated response at step i, target agent\\u2019s generated response at step i) sequentially. This design aims to reduce user confusion and enhance transparency, ensuring a clear and coherent interaction experience.**\\n\\nAnother question worth asking is what content generated by each agent we should provide to the user: **should we display only the final result for each step or the entire thinking process?** In the former case, there would be a longer wait for the step to be presented; in the latter, the volume of text generated could be overwhelming. This decision also influences whether we can and should expect immediate user interruptions. 
In the current system design, we choose to present only the final decided step, which reduces the cognitive load on the user. However, further exploration of how much information we should provide to users, and how to design the corresponding user interface, can and should be discussed.\\n\\n> ***Q7: What considerations should be explored for scaling the speculative planning framework in resource-constrained environments, especially where concurrent instances of T may lead to bottlenecks?***\\n\\nThank you for raising this important question. **If I understand correctly, you're asking how we can scale the speculative planning framework in resource-constrained environments when multiple users are requesting the agent simultaneously or within a short time window**.\\nTo address this challenge, several considerations should be explored:\\n\\n(1) **Overall Latency**: We need to ensure that the total waiting time for all users is minimized. Efficiently distributing resources can help reduce latency and improve the overall user experience.\\n\\n(2) **Fairness**: It's important to prevent scenarios where some users experience excessively long wait times due to optimizations aimed at minimizing total latency. We should strive for a balanced approach that avoids disproportionately disadvantaging any user request.\\n\\n(3) **Cost Management**: Latency and cost must be balanced, though in situations with limited concurrent instances of T, the cost is also upper-bounded, as we cannot use a large k for most requests.\\n\\n(4) **Request Prioritization**: Recognizing that some tasks are more urgent and require lower latency, we should implement a priority system. 
Urgent requests can be given higher priority, while less time-sensitive tasks can be scheduled accordingly.\\n\\nBy carefully considering these factors, we can develop strategies to effectively scale the speculative planning framework, ensuring it remains efficient and fair even in resource-constrained settings.\"}", "{\"metareview\": \"The paper focuses on proposing a new method for designing an agent system as well as a user interface together - placing user interactions in the center. As agents begin to automate more and more tasks, such methods will become very relevant. All reviewers are also unanimous in their agreement about the merits of the paper.\", \"additional_comments_on_reviewer_discussion\": \"There was reasonable discussion during the rebuttal phase and a good faith attempt to answer all the reviewers' concerns by the authors. SCq5 increased their score and many of the other concerns by others were seemingly addressed.\"}", "{\"title\": \"Rebuttal for Reviewer 2 for Weakness 1, 2(a) & 2(b)\", \"comment\": \"We would like to extend our sincere gratitude for the invaluable time and effort you've dedicated to reviewing our manuscript and for providing us with detailed feedback.\\n\\nBelow are our replies to the concerns you raised:\\n\\n---\\n\\n> ***W1: The method applies to synchronous Human-AI interaction where a human is in the loop and waiting for agent response in real time.***\\n\\nThank you for raising this important point. Indeed, our method is designed for synchronous human-AI interaction, where a human is actively involved and waiting for real-time responses from the agent, **because latency matters the most in real-time scenarios.** But we acknowledge that there are different applications of agents, including asynchronous interactions where a human may not be involved in real-time. In the limitations section of our paper, we have briefly mentioned this constraint (Appendix G. Limitations of the Current User Interface). 
**We will definitely emphasize this distinction more clearly in the revised manuscript to better position our work.**\\n\\n> ***W 2(a): When approximation agent and target agent can (often) disagree, it can lead to user confusion and lack of transparency on what is going on on UI.***\\n\\nThank you for highlighting this important aspect. **To address this, our user interface design, as described on page 6, incorporates a rescheduling mechanism that carefully manages when and which outputs from both the approximation and target agents are presented to the user.**\\n\\nSpecifically, the user interface only displays the approximation agent\\u2019s response based on a confirmed action trajectory. For instance, if the approximation agent quickly generates two sequential actions before the target agent confirms the first one, only the first action generated by the approximation agent will be shown. The system will then wait for the target agent's result on the first step before presenting the next message, which is the target agent\\u2019s result on the first step. If the target agent\\u2019s result aligns with the approximation agent\\u2019s result, the second step computed by the approximation agent will be presented, as it is based on a correct action trajectory.\\n\\nConsequently, the user will see pairs of (approximation agent\\u2019s generated response at step i, target agent\\u2019s generated response at step i) sequentially. This design aims to reduce user confusion and enhance transparency, ensuring a clear and coherent interaction experience.\\n\\n> ***W2 (b): The window for interruption is basically the time for target agent to execute, which is not user-friendly or \\\"user-centric\\\".***\\n\\nThank you for raising this critical point. **We would first like to direct your attention to Appendix G. 
Limitations of the Current User Interface, where we discuss the limitations and potential future directions of the user interface.**\\n\\n**First of all, we want to emphasize that the current design can be very useful**. Consider, for example, multi-agent discussions, where the target agent may take several minutes to complete a single step. This approach was motivated by our observations of systems like AutoGen, which prompts the user after one round of multi-agent discussion and waits for user input. In many cases, users may lack the patience for multi-round discussions, especially when the action or step under discussion is trivial. This observation motivated us to design a system where users can actively interrupt and interact with the process.\\n\\nSecondly, a crucial related design/research question here is what and how much information to present to the user: **should we display only the final result for each step, or the entire thinking process?** In the former case, there would be a longer wait for the step to be presented; in the latter, the volume of text generated could be overwhelming. **This decision also influences whether we expect immediate user interruptions or potentially implement a roll-back mechanism to change what has been planned due to long reading time on the user's side**. The current system design presents only the final decided step, which reduces the cognitive load on the user. However, a roll-back mechanism would be ideal for scenarios where more information is presented and could also enhance the current presentation mode by offering users more flexibility and the option to be less attentive. Combining immediate user interruptions with a roll-back mechanism could indeed be a valuable future direction. 
We will consider this enhancement in our ongoing work.\"}", "{\"title\": \"Summary of Rebuttal\", \"comment\": \"Dear reviewers, AC, and SAC,\\n\\nWe sincerely thank the reviewers for dedicating their time to our paper and providing such insightful comments. We have summarized the discussion during the rebuttal period into the six aspects presented below:\\n\\n> ***1. How to select an appropriate approximation agent (reviewer SYAX, SCq5)?***\\n\\nGiven user preferences or requirements on acceptable ranges of cost increase and time reduction, we can compute all possible configurations (including accuracy and speed) of the approximation agent for each value of k that meet these criteria. The detailed computation and selection process can be found in our response https://openreview.net/forum?id=BwR8t91yqh&noteId=5v5YExOc78. A discussion of the advantages of the wide range of choices of approximation agent configurations can be found in https://openreview.net/forum?id=BwR8t91yqh&noteId=6IZ7SdHSwz.\\n\\n> ***2. How do we choose k in the system (reviewer SCq5)?***\\n\\nSimilarly, based on preferences or requirements for acceptable cost increases and time reduction proportions, we can determine all possible values of k for each configuration of the approximation agent that comply with these requirements. The computation and selection process is detailed in the same response in https://openreview.net/forum?id=BwR8t91yqh&noteId=5v5YExOc78 \\n\\n> ***3. How much does the system cost, and what are possible ideas to reduce that cost (reviewer SYAX, SCq5)?***\\n\\nWe have added a cost increase analysis to the revised paper as requested by Reviewer SCq5. Potential ideas to reduce costs, such as model cascading and offline or online model training for the approximation agent, are discussed in our response in https://openreview.net/forum?id=BwR8t91yqh&noteId=6IZ7SdHSwz\\n\\n> ***4. 
Additional experiments using a smaller and larger model from a particular model family with the same prompting approach (reviewer SYAX)?***\\n\\nExperimental results using Llama 3.1 models of varying sizes (8B, 70B, 405B) with Chain-of-Thought and ReAct generation are presented in our response Q1 in https://openreview.net/forum?id=BwR8t91yqh&noteId=TVIDUMdXDi. All additional experiments show noticeable time reductions in agent planning, supporting our main motivation.\\n\\n> ***5. How does the design of the UI/scheduling mechanism enable a more user-friendly interface (reviewer y5dF, SCq5)?***\\n\\nOur UI design encapsulates the complex concurrent processes of speculative planning into a sequential and easily understandable format. This approach allows users to perceive the computation latency associated with the target agent and how computation time is saved by the approximation agent. It also represents a technical contribution in building the first system that allows synchronous real-time human-AI interaction, which is a scenario where latency matters the most. Relevant answers can be found in https://openreview.net/forum?id=BwR8t91yqh&noteId=hYa22oPSPP and https://openreview.net/forum?id=BwR8t91yqh&noteId=GruMFkfDbG\\n\\n> ***6. The range of applicable domains of the acceleration method (reviewer y5dF)***\\n\\nAs mentioned in Appendix G, \\\"Limitations and Future Directions. Spectre,\\\" the current implementation is not suitable for high-stakes domains. We also emphasized this in Q2 of https://openreview.net/forum?id=BwR8t91yqh&noteId=33Tu4b7I4V.\"}", "{\"title\": \"Reply for Questions 1 - 4\", \"comment\": \"> ***Q1: In Figure 1, please briefly state the reason for using Venmo as an example.***\\n\\n**Venmo was chosen as an example simply because it is in widespread use**, making it a relatable and accessible illustration for readers. 
However, if using a specific app name in the paper is considered unprofessional, **we could certainly change the example to a \\\"bank app\\\" or another universally recognized application to maintain a more professional and generic tone**. We can make this adjustment in the final version of the paper.\\n\\n> ***Q2: Line 100: Statements like \\\"This strategy [[potentially]] reduces the time a target agent\\\" suggests either that this gain is not universal or that the authors have doubts about their results.***\\n\\nSorry for the confusion. We use the word \\u201cpotentially\\u201d because **in the worst-case scenario** where the approximation agent makes a mistake at every single step, **the running time of the system will be exactly the time of normal agent planning**. Therefore, though very unlikely, it is possible that there is no time reduction at all. This is why we use the word \\u201cpotentially\\u201d: to cover this very unlikely case.\\n\\n> ***Q3 (a): Under what circumstances should each evaluation metric (e.g., accuracy vs. speed vs. cost) be prioritised, and how might these impact the framework's configuration, e.g., in terms of k?***\\n\\n**For accuracy**: the prioritization of accuracy does not affect k, as this method guarantees the performance to be the same as normal agent planning. Thus, no matter what k is, accuracy will not be sacrificed (unless some lossy matching method is used).\\n\\n**For speed**: as presented in **Appendix C4, Figure 10 (a)**, a higher k always leads to a quicker system. Thus, if speed is prioritized, we should use a very large k.\\n\\n**For cost**: as presented in **Appendix C4, Figure 10 (b)**, a lower k always leads to a lower cost. 
Thus, if cost is prioritized, we should use a very small k.\\n\\nMore detailed analysis of how to choose k can be found in https://openreview.net/forum?id=BwR8t91yqh&noteId=5v5YExOc78\\n\\n> ***Q3 (b): do the authors believe there is a case for solely focusing on cost and not efficiency and vice versa?***\\n\\nYes, we believe there are cases where it is appropriate to focus solely on either cost or efficiency, depending on the specific context and user needs. \\n\\n**Prioritizing Efficiency**: In scenarios such as customer service chatbots, quick response times are crucial for user satisfaction and engagement. Users expect immediate assistance, and delays can lead to frustration or loss of trust. In such cases, it makes sense to prioritize efficiency even if it results in higher costs. \\n\\n**Prioritizing Cost**: Conversely, there are many situations where time efficiency is less critical, and minimizing cost becomes the primary concern. For example, when designing a travel plan for a user in a non-urgent context, the user can attend to other tasks while waiting for the plan to be generated.\\n\\n> ***Q4: Do the authors have insight into reducing costs to make the approach more appealing?***\\n\\nThank you for raising this important question. There are several notable approaches in the literature that address cost reduction in agent planning:\\n\\n**EcoAssistant**: The paper titled \"EcoAssistant: Using LLM Assistant More Affordably and Accurately\" employs a model cascade to reduce planning costs by initially using a smaller, more efficient model. While this approach saves cost, it does so at the expense of increased latency.\\n\\n**System-1.x**: The paper \"System-1.x: Learning to Balance Fast and Slow Planning with Language Models\" fine-tunes a controller, a System-1 Planner, and a System-2 Planner based on a single LLM. 
This system uses the controller to decide whether to use System-1 or System-2 for planning specific steps, thereby reducing costs. However, this method requires model training on specific tasks.\\n\\n**Online Speculative Decoding**: The paper is about speculative decoding, but I think the idea can be adapted to speculative planning. Online Speculative Decoding proposes tuning the approximation LLM based on feedback from the larger target LLM. This approach enhances the accuracy of the approximation model and reduces latency through online learning. A similar idea could be applied in our context to improve the accuracy of the approximation agent and thus reduce the cost (fewer wasted processes) as well as latency.\\n\\nCurrently, I do not have any other brand-new ideas for reducing both latency and cost beyond what these models have discussed. In general, I think model tuning/training may be unavoidable if we want to reduce both latency and cost. However, this is at the top of my to-do/to-think list, and I am actively exploring potential solutions.\\n\\nThank you for your insightful feedback. We will continue to investigate ways to optimize both cost and latency in our future work.\"}", "{\"comment\": \"Thank you for the detailed rebuttal; it answers many of my concerns. I have now read the comments from other reviewers and also the rebuttals to those comments.\\n\\nThe only minor comment to the rebuttal I had was about the statement \\\"there are two domains where the present implementation may fall short: High-stakes domains..\\\". I am glad the authors acknowledge that when a task may include irreversible subtasks, this may not be a good idea. 
However, in the intro as part of the motivation, the authors state \\\"Particularly in scenarios where complex tasks are delegated to LLM-based agents, often involving high stakes and complex decision-making processes, users may not anxiously wait for the agent to respond all at once, but rather expect the agent to provide timely feedback\\\". This gives the impression that speculative planning is particularly suited for high-stakes domains. Anyway, since this is a rather minor rephrasing comment, I am happy to raise my score as the rebuttal clarifies the questions I had.\"}", "{\"title\": \"Rebuttal for Reviewer 1 for question 1, 2 & 3\", \"comment\": \"> ***Q1: The evaluation could be improved by comparing a smaller and larger model from a particular model family, i.e. 8b vs. 70b llama using the same approach.***\\n\\nThank you for the suggestion; it is indeed an interesting design. **But notice that Setting 4 uses exactly the setting you are mentioning, where we utilize GPT-3.5 and GPT-4 with direct generation.** To enrich the experiments for this design, here we present extra experiments (on 50 random datapoints for now) on OpenAGI **using the Llama-3.1 models** with FP8 quantization via the Together AI API. The generation strategies are CoT and ReAct. 
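As context for reading the numbers below (e.g., why speculative total time is lower than normal planning time but never worse), the latency behavior of speculative planning can be approximated with a simple back-of-the-envelope model. This is a hedged sketch under simplifying assumptions (fixed per-step times, no concurrency cap k, and a known set of mis-speculated steps), not the paper's formal analysis; all names are illustrative:

```python
def speculative_latency(n_steps, t_approx, t_target, wrong_steps):
    """Rough latency model of speculative planning.

    wrong_steps: set of 0-based step indices where the approximation
    agent's proposal disagrees with the target agent's.  Each target
    verification of step j is assumed to start once the speculated
    trajectory up to j-1 is available and to take t_target seconds.
    """
    time, step = 0.0, 0
    while step < n_steps:
        # Speculate forward from the confirmed prefix until the first error.
        run = 0
        while step + run < n_steps and (step + run) not in wrong_steps:
            run += 1
        if step + run < n_steps:
            # The failing step's verification starts after `run` approximation
            # steps; the target agent's result then fixes that step.
            time += run * t_approx + t_target
            step += run + 1
        else:
            # The rest of the plan is speculated correctly; only the last
            # verification remains on the critical path.
            time += max(run - 1, 0) * t_approx + t_target
            step += run
    return time
```

In the worst case (every step mis-speculated) the total reduces to `n_steps * t_target`, i.e., exactly normal agent planning time, which is the strict latency upper bound referred to in this thread.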
\\n\\nFor each table, we present normal agent planning with 70b model, speculative planning with 8b + 70b model, normal agent planning with 405b model, speculative planning with 70b + 405b model, normal agent planning with 405b model, and speculative planning with 8b + 405b model.\\n\\n**Result using CoT prompting strategy**\\n\\n| settings | 70b | 8b + 70b | 405b | 70b + 405b | 405b | 8b + 405b |\\n| -------- | ----- | -------- | ------ | ---------- | ------ | --------- |\\n| TT | 37.68 | 31.33 | 32.030 | 26.25 | 32.030 | 32.47 |\\n| ST | 3.85 | 3.05 | 5.71 | 5.08 | 5.71 | 5.59 |\\n| TO | 1880 | 2684.67 | 926.5 | 1662.17 | 926.5 | 1328.19 |\\n| SO | 182.4 | 331.12 | 162.82 | 339.08 | 162.82 | 196.0 |\\n| MC | 1 | 4.00 | 1 | 3.91 | 1 | 4.52 |\\n| cost | 0.007 | 0.011 | 0.014 | 0.018 | 0.014 | 0.021 |\\n\\n**Result using ReAct prompting strategy**\\n\\n| settings | 70b | 8b + 70b | 405b | 70b + 405b | 405b | 8b + 405b |\\n| -------- | ------- | -------- | ------- | ---------- | ------- | --------- |\\n| TT | 46.03 | 33.12 | 49.34 | 41.08 | 49.34 | 45.49 |\\n| ST | 5.98 | 3.88 | 7.693 | 5.87 | 7.693 | 7.134 |\\n| TO | 1074.57 | 1511.33 | 1343.19 | 1840.84 | 1343.19 | 1690.62 |\\n| SO | 149.71 | 177.09 | 195.59 | 284.18 | 195.59 | 241.0 |\\n| MC | 1 | 4.05 | 1 | 4.10 | 1 | 4.67 |\\n| cost | 0.013 | 0.018 | 0.028 | 0.038 | 0.028 | 0.033 |\\n\\n> ***Q2: Are there more efficient approaches to validating the approximate model's plan without comparing directly to the target model's plan?***\\n\\nThank you for raising this insightful question. If I understand correctly, you are asking how to efficiently evaluate whether the approximation agent's planned step is correct without directly comparing it to the target agent's plan.\\n\\nCurrently, similar to speculative decoding, our approach relies on the target agent for verification, which means we have to wait until the target agent completes the corresponding step. 
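To make the comparison-based verification concrete, here is a minimal sequential sketch. The interfaces (`task.done`, `approx_agent.next_step`, `target_agent.next_step`) are assumptions for illustration, and the real system runs target-agent verifications concurrently rather than in this simplified order:

```python
def speculative_plan(task, approx_agent, target_agent, k):
    """Sketch of speculative planning with comparison-based verification.

    The fast approximation agent speculates up to k steps ahead of the
    confirmed trajectory; the slow target agent recomputes each step,
    and the trajectory rolls back to the first disagreement.
    """
    trajectory = []
    while not task.done(trajectory):
        # Speculate up to k steps ahead from the confirmed prefix.
        guesses = []
        while len(guesses) < k and not task.done(trajectory + guesses):
            guesses.append(approx_agent.next_step(task, trajectory + guesses))
        # Verify each guess; the target agent's step is always the one kept.
        for guess in guesses:
            verified = target_agent.next_step(task, trajectory)
            trajectory.append(verified)
            if verified != guess or task.done(trajectory):
                break  # a mismatch discards the remaining guesses
    return trajectory
```

Because the target agent's result is always the one appended, the returned trajectory is identical to what target-only planning would produce, which is the output-equivalence guarantee discussed here.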
However, there are other potential methods to consider. For instance, we could use an **external evaluator**, which could be the target agent itself if it can perform the evaluation more quickly than computing the step. Alternatively, we could **run the proposed step in a simulated environment**, assuming this process is faster than waiting for the target agent to finish computing.\\n\\n**While these methods could improve efficiency, they may not guarantee that the final output of such a speculative planning system, using both approximation and target agents, would be equivalent to that of normal agent planning using only the target agent**. Currently, by comparing with the target agent's plan, we can ensure output equivalence. Other methods can certainly be explored for better efficiency, but at present, I cannot think of a validation method that both guarantees output equivalence and is faster than comparing with the target model's plan. (Notice that not being able to maintain output equivalence is not necessarily a bad thing, as it could potentially improve the target agent's performance.)\\n\\nWe appreciate your feedback and will continue to investigate more efficient validation approaches in our future work.\\n\\n> ***Q3: If the user interrupts the plan, do you envision that they edit the agent's trajectory directly, or provide feedback and have the agent regenerate the step?***\\n\\nThank you for raising this important question. **Currently, we envision users editing the agent trajectories directly**. While we have implemented an additional feature to accept user feedback, the latency can vary significantly based on the nature of the user input and how the model interprets this feedback. Given that the current focus of our paper is on developing a system that decreases latency with a strict upper bound, we have not included experimental results for the feedback-taking scenario. 
We appreciate your insight and will consider exploring this aspect in future work to provide a more comprehensive understanding of user interaction dynamics.\\n\\n----\\n\\nWe hope our answers clarify your questions. Thank you again!\"}", "{\"title\": \"Further discussion and potential score increase?\", \"comment\": \"Dear Reviewer y5dF,\\n\\nWe sincerely appreciate the time and effort you've invested in reviewing our paper. Your insightful comments and thoughtful feedback are invaluable to us.\\n\\nWe hope that our responses have addressed your questions and clarified any ambiguities. Please do not hesitate to reach out if you have any further questions!\\n\\n**We would be truly grateful if you would consider our clarifications and kindly reevaluate your score.**\\n\\nThank you once again for your consideration.\\n\\nBest regards, \\nthe authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"I thank the authors for their responses. Thank you for agreeing to incorporate a discussion on the approach's costs and providing insights into how these can be addressed. However, I'm not convinced by your discussion on W2 and Q5. We may \\\"believe\\\" that our designed system/UI \\\"will\\\" work, but that remains a hypothesis unless demonstrated. 
As such, I leave my score as it is.\"}", "{\"title\": \"Thank you for your reply\", \"comment\": \"Dear reviewer SCq5,\\n\\nThank you very much for your valuable feedback!\\n\\n**We acknowledge the importance of a comprehensive user study to fully understand the design needs for effective user interactions in agentic systems, whereas our work aims to provide a technical framework that supports various design choices.** By integrating the user as a critical component in the algorithm and addressing the core rescheduling challenge, we hope to lower the technical barrier and encourage more ML and HCI researchers to explore this user-centric LLM agent system direction.\\n\\n**Our proposed UI-level algorithm aims to present an intuitive and streamlined planning process.** The planning steps remain ordered to the user along with clear verification signals at the frontend, and all concurrent, backtracking, and pruning processes are handled in the backend. Therefore this approach is specifically designed to reduce the potential cognitive burden and minimize the need for extensive special training on end users.\\n\\n**While our contribution is technical and some hypotheses remain to be tested via real-world user studies, we believe these efforts are still valuable for enabling researchers to conduct follow-up studies and focus on core HCI problems.**\\n\\nWe wanted to thank the reviewer again for these critical questions. We will revise the introduction to clarify the contributions of our work and enhance the discussion to better highlight how this framework can be incorporated into different UI designs.\\n\\n**In addition, I hope you find my replies to other questions such as the choice of k helpful :) We would be truly grateful if you would consider our clarifications and kindly reevaluate your score.**\\n\\nBest,\\nthe authors\"}", "{\"title\": \"Thank you for raising the score!\", \"comment\": \"Dear reviewer,\\n\\nThank you very much for your appreciation! 
\\n\\nThank you again for pointing out the phrasing issue; I will rephrase that part. \\n\\nBest,\\nthe authors\"}", "{\"title\": \"Further discussion and potential score increase?\", \"comment\": \"Dear Reviewer SCq5,\\n\\nWe sincerely appreciate the time and effort you've invested in reviewing our paper. Your insightful comments and thoughtful feedback are invaluable to us.\\n\\nWe hope that our responses have addressed your questions and clarified any ambiguities. Please do not hesitate to reach out if you have any further questions!\\n\\n**We would be truly grateful if you would consider our clarifications and kindly reevaluate your score.**\\n\\nThank you once again for your consideration.\\n\\nBest regards, \\nthe authors\"}", "{\"summary\": \"The paper presents an interesting approach to reduce the potential latency of LLM-based systems, one where the user and user interruptions are built into the system design. The approach involves two different agents: a faster but more error-prone approximation agent and a slower but more accurate target agent. Depending on the accuracy of the approximation agent, the speed of the system can be significantly faster, and in the worst case, comparable to a non-speculative planning agent.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Overall, the architecture of using a faster (but potentially inaccurate) agent and a slower (but possibly more accurate) agent in order to reduce the latency of the system is an interesting idea and can potentially work well for certain tasks with careful UI considerations.\", \"weaknesses\": \"1. The method applies to synchronous Human-AI interaction where a human is in the loop and waiting for agent response in real time. An entirely different application of Agents is asynchronous interaction (where a human may not be in the loop in real-time, e.g. perform this action every day at this time ..). 
I would encourage the authors to highlight this in order to position their work better.\\n\\n2. I vehemently agree with the statement \\\"A fully automated \\u201cblackbox\\u201d agent system with prolonged response delays is suboptimal for user experience\\\". However, I believe the paper uses the term \\\"user-centric\\\" a bit loosely; I am not quite sure how the current method is more \\\"user-centric\\\". There are many places of contention:\\n\\na) Given the approximation agent and target agent can (often) disagree, it means initial responses may be overwritten or multiple intermediate responses shown to the user. Without careful UI design, this can lead to user confusion and a lack of transparency on what is going on. \\nb) A user-in-the-loop workflow where the user can actively take part in the decision making is absolutely necessary. However, in the current system, the window for interruption is basically the time for the target agent to execute. The user can potentially take time to read the response from the approximation agent, process it, decide if it needs to be revised manually, and then potentially frame the revision in text or voice, all of which can take some amount of time. I am failing to see how this is \\\"user-centric\\\" when the user is hurried to interrupt within a short window of time.\\nc) I did not fully understand how this mechanism can work when the action execution can be anything of consequence in the user interaction (e.g. sending an email or a request for money to split a bill on a mobile pay application). This in practice would mean that, if the approximation agent is wrong, the system would perform tons of incorrect actions that cannot be reversed. I would like a more detailed discussion of the limitations of where this approach can work and where it should not be applied. \\n \\nMore generally, there are established ways to study human factors aspects of Human-AI interaction, employing carefully controlled user studies. 
The current \"theoretical approach\" to human factors is an interesting first step but leaves a lot of questions unanswered that could potentially make or break the interaction.\", \"questions\": \"1. Could a theoretical focus on system latency lead to newer issues in the human-AI interaction?\\n2. Is this approach suitable for all tasks? A more detailed discussion would be useful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal for Reviewer 1 for weakness 1 & 2\", \"comment\": \"We would like to extend our sincere gratitude for the invaluable time and effort you've dedicated to reviewing our manuscript and for providing us with detailed feedback.\\n\\nBelow are our replies to the two main concerns you raised:\\n\\n---\\n\\n> ***W1: The algorithm trades off time with efficiency. Are there ideas to also improve efficiency?***\\n\\nThank you for raising this important question. There are several notable approaches in the literature that address cost reduction in agent planning:\\n\\n**EcoAssistant**: The paper titled \"EcoAssistant: Using LLM Assistant More Affordably and Accurately\" employs a model cascade to reduce planning costs by initially using a smaller, more efficient model. While this approach saves cost, it does so at the expense of increased latency.\\n\\n**System-1.x**: The paper \"System-1.x: Learning to Balance Fast and Slow Planning with Language Models\" fine-tunes a controller, a System-1 Planner, and a System-2 Planner based on a single LLM. 
Online Speculative Decoding proposes tuning the approximation LLM based on feedback from the larger target LLM. This approach enhances the accuracy of the approximation model and reduces latency through online learning. A similar idea could be applied in our context to improve the accuracy of the approximation agent and thus reduce the cost (fewer wasted processes) as well as the latency.\\n\\n**Currently, I do not have any other brand-new ideas for reducing both latency and cost beyond what these works have discussed. And in general, I think model training may be a necessary step if we want to reduce both latency and cost.** However, this is at the top of my to-do/to-think list, and I am actively exploring potential solutions.\\n\\nThank you for your insightful feedback. We will continue to investigate ways to optimize both cost and latency in our future work.\\n\\n---\\n> ***W2: The selection of an approximate model and the target model may be difficult to satisfy, and increase system complexity in practical applications.***\\n\\nThank you for raising this important consideration. The choice of the approximation agent is indeed crucial for the system's effectiveness. \\n\\n**Speculative planning is compatible with a very broad range of approximation agents for a given target agent** (unlike speculative decoding, where the approximation models must be from the same group to ensure they share the same vocabulary for sampling): For a given target agent, we can select agents with the same backbone LLM but different prompting styles or available tools. Alternatively, we can choose agents with weaker backbone LLMs using the same or different prompting methods, or employ more complex prompting methods to match performance. Additionally, we can opt for multi-agent systems with various ensembles of agents. \\n\\n**This wide range of possibilities is a net positive rather than a downside**. 
This wide range of possibilities means we have extensive opportunities to optimize the system and find the most suitable agent or multi-agent configuration. While selecting the \\\"right\\\" approximation agent from such a broad spectrum may seem challenging, it also presents a significant advantage: greater flexibility and a wider range of possibilities for optimization. This flexibility allows us to tailor the system more precisely to various needs and scenarios.\\nComparing against the target agent\\u2019s plan on some agentic benchmark is definitely the most reliable method to determine which approximation agent to use.\\n\\n**Regarding the issue of increasing system complexity, this can be mitigated through careful design and packaging.** If well-designed and packaged, such a system should be easily leveraged, similar to speculative decoding, which has been integrated into many LLM serving platforms and packages such as vLLM. Currently, the system can be used off-the-shelf, thus it does not significantly increase complexity in applications from either the application development side or the user side.\"}", "{\"title\": \"Reply for last question\", \"comment\": \"> ***Q: Since the approach's success largely depends on the accuracy of A, what do the results and analysis of k (in the appendix) indicate in terms of any minimum thresholds for A's accuracy?***\\n\\nThank you for your insightful question regarding the minimum thresholds for A's (the approximation agent's) accuracy in relation to k. Appendix C4 provides a comprehensive overview of how the speed of A, the accuracy of A, and the value of k interact.\\nHere, to offer a more concrete method for selecting hyperparameters, we introduce a user-preference parameter $\\\\alpha$, where $0\\\\leq\\\\alpha<\\\\infty$. This parameter represents the trade-off between time saved and cost increased, both measured in percentages. 
We define $\\\\alpha$ as \\n$$\\\\alpha = \\\\frac{\\\\text{time of NAP}/\\\\text{time of SP}}{\\\\text{cost of SP}/(\\\\text{cost of target agent using NAP} + \\\\text{cost of approximation agent using NAP})}$$\\nwhere NAP is the abbreviation of normal agent planning and SP is the abbreviation of speculative planning. We choose the cost baseline to be the cost of the approximation agent ($A$) using NAP + the cost of the target agent ($T$) using NAP, as this represents the minimum possible number of tokens the system will generate. If a user cannot afford this token generation cost, they should not use such a system.\\n\\nA higher $\\\\alpha$ indicates that we require the time reduction percentage to be >= the cost increase percentage; a lower $\\\\alpha$ means we can bear a very high cost or a very large latency.\\n\\nTo determine the configuration of the speculative planning system, we consider two primary questions:\\n\\n1. **Given specific configurations of $T$ and $A$, and a certain preference value $\\\\alpha$, how do we determine the possible values of k?**\\n\\n2. **Given a specific $T$ and a fixed k (possibly determined by available resources), and a certain preference value $\\\\alpha$, how do we determine the configurations of $A$?**\\n\\nTo answer the two questions, we adopt the simulation setting we use in Appendix C4: \\n\\n(1) $T$ (Target Agent): Takes 8 seconds per step, generates 30 tokens per step. \\n(2) Plan: Consists of 10 steps.\\n\\n---\\n\\n**For question 1:** Let's explore, for different configurations of $A$ (accuracy, time per step), what k is required given a specific threshold of $\\\\alpha$.\\n\\n1. $A$ with (0.5 accuracy, 2 seconds per step):\\n\\nIf $\\\\alpha = 1.2$, possible k values to obtain an $\\\\alpha$ value higher than 1.2 are 2, 3, 8. This suggests we can either proceed slowly with limited k to minimize token waste or opt for a higher k to save time while controlling waste. A multi-objective trade-off is required here.\\n\\n2. 
$A$ with (0.6 accuracy, 2 seconds per step):\\n\\nIf $\\\\alpha = 1.2$, possible k values can range from 2 to 10.\\n\\n3. $A$ with (0.7 accuracy, 2 seconds per step):\\n\\nIf $\\\\alpha = 1.2$, possible k values can range from 2 to 10.\\n\\nIf $\\\\alpha = 1.5$, possible k values are 4, 5, 8, 9, 10.\\n\\n4. $A$ with (0.9 accuracy, 2 seconds per step):\\n\\nIf $\\\\alpha = 1.2$, possible k values can range from 2 to 10.\\n\\nIf $\\\\alpha = 1.5$, possible k values can range from 2 to 10.\\n\\nIf $\\\\alpha = 2$, possible k values can range from 5 to 10.\\n\\n**Basically, higher accuracy in $A$ allows for greater flexibility in choosing k while meeting the user's preference between time and cost.**\\n\\n---\\n\\n**For question 2:** Given a specific $T$ and a fixed k (possibly determined by available resources), and a certain preference value $\\\\alpha$, how do we determine the configurations of $A$?\\n\\nFor example, with k = 5:\\n\\n1. If $\\\\alpha = 1.2$, then the choices for $A$ are relatively broad. Possible configurations are\\n\\n(0.5, 1), (0.6, 1), (0.6, 2), (0.6, 3)\\n\\n(0.7, 1), (0.7, 2), (0.7, 3), (0.7, 4), (0.7, 5)\\n\\n(0.8, 1), \\u2026, (1.0, 5)\", \"minimum_required_accuracy\": \"0.8\\n\\n**So in general, as $\\\\alpha$ increases, the minimum required accuracy of $A$ also increases, and the acceptable time per step decreases.**\\n\\n\\nIn our simulation experiments, we have not yet exhaustively explored all possible combinations of $A$ and k, as there are approximately 1,000 potential combinations. For each combination, we conduct 10 experiments to account for randomness. We are actively working on completing these simulations to provide clear and detailed guidance for users. 
Our goal is to help users configure the system effectively based on their time constraints and resource availability.\"}", "{\"title\": \"Rebuttal for Reviewer 2 for Weakness 2(c), Question 1 & 2\", \"comment\": \"> ***W 2(c): I did not fully understand how this mechanism can work when the action execution can be anything of consequence in the user interaction (e.g. sending an email or request for money to split bill on a mobile pay application).***\\n\\nThank you for raising this important concern. **We would first like to direct your attention to Appendix G. Spectre**, where we have discussed the issue of speculative execution vulnerabilities, commonly referred to as Spectre. \\n\\n**Spectre (security vulnerability) is well-documented in the literature (McIlroy et al., 2019; Kocher et al., 2020) and pertains to vulnerabilities involved in speculative execution**, a hardware feature that enhances processor performance by predicting and executing future instructions. Speculative execution and its associated risks have a long history in operating systems.\\n\\nGiven the potential for incorrect and irreversible actions, such as sending emails or financial transactions, **we agree that the current form of Interactive Speculative Planning should be applied cautiously**. Specifically, we recommend constraining its use to non-high-stakes areas where the consequences of incorrect actions are minimal. **In Appendix G, we also outline potential solutions for safer Interactive Speculative Planning, aiming to broaden its applicability while ensuring security and reliability.**\\n\\nWe will continue to explore and address these limitations in our ongoing work.\\n\\n> ***Q1: Could a theoretical focus on system latency lead to newer issues in the human-AI interaction?***\\n\\nA theoretical focus on system latency can indeed introduce new considerations and potential issues in human-AI interaction. 
Below are two examples:\\n\\n(1) The relationship between user experience and the amount of text presented is crucial. If the presented text is very long and dense, decreasing latency may not necessarily improve user experience. Conversely, if the text is concise and well-summarized, reducing latency could ideally enhance user satisfaction by providing timely and digestible information.\\n\\n(2) Designing an interactive system with a theoretically guaranteed latency could be highly beneficial. Such a system would reassure users that, regardless of their interaction style, the time the system spends on a task is guaranteed to be below a certain threshold. This predictability can enhance user trust and satisfaction, knowing that the system will respond within a reliable timeframe.\\n\\n> ***Q2: Is this approach suitable for all tasks? A more detailed discussion would be useful.***\\n\\nThank you for raising this important question. Currently, there are two domains where the present implementation may fall short:\\n\\n(1) **High-stakes domains**: As mentioned in Appendix G, \\\"Limitations and Future Directions. Spectre,\\\" the current implementation is not suitable for high-stakes domains.\\n\\n(2) **Agentic tasks where the order of steps is not crucial**: These tasks may also be unsuitable, as there could be many false negatives in rejecting the approximation agent's plan if the steps are simply in a different (but not incorrect) order compared to the target agent.\\n\\nWe appreciate your feedback and will move the relevant content from the appendix to the main body of the paper to provide a more detailed discussion on the suitability of this approach for various tasks.\\n\\n---\\n\\nWe really appreciate your time and consideration in reviewing our paper. We would really appreciate it if you could re-assess our work, especially if our explanations have clarified any previous ambiguities. 
We will incorporate further motivation in our revised manuscript.\"}", "{\"title\": \"Rebuttal for Weaknesses 1 & 2\", \"comment\": \"We would like to extend our sincere gratitude for the invaluable time and effort you've dedicated to reviewing our manuscript and for providing us with detailed feedback.\", \"below_are_our_replies_to_the_concerns_you_raised\": \"---\\n\\n> ***W1: One downside, not fully addressed or discussed in the paper, is the increased cost.***\\n\\nThank you for pointing out this important aspect. We appreciate your feedback and agree that a balanced discussion of both the advantages and limitations of our approach is essential.\\n\\n**Firstly, we will definitely add more information and analysis on the increased cost in Sections 4.1 and 4.2**. In our main results table in section 4.1 and 4.2, we present the time saved but also the increased cost in terms of total tokens generated, the number of concurrent API calls, and the overall change in cost. We will add an analysis of the increasing cost. Here is an example analysis that we can/will present in the updated version of the paper:\\n```\\nIn the first and second settings, the computation time decreased by approximately 20% and 30%, respectively, at the cost of a ~70% increase in monetary cost. In the third setting, the computation time decreased by about 40%, at the cost of a 37% increase in monetary cost. This is because, in the third setting, multi-agent discussion is itself a very expensive prompting method. In the last setting, where we use GPT-3.5 as the approximation agent, the computation time decreased by about 20% at almost no additional cost due to the very cheap nature of GPT-3.5. 
Thus, we can see that the increase in cost can be mitigated if the prompting method is simpler or if the approximation agent\\u2019s backbone model is cheaper.\\n```\\n**Secondly, we would like to direct your attention to Appendix C2, where we have extensively analyzed the increased cost.** In this section, we cover:\\n\\n(1) The best and worst cases of total tokens generated in the system, which directly correspond to the cost.\\n\\n(2) The best and worst cases of concurrent API calls required in the system.\\n\\nWe will move some of this detailed analysis back to the main body of the paper to ensure that the discussion is comprehensive and balanced. Thank you again for your valuable feedback.\\n\\n\\n> ***W2: Although the title suggests a \\\"co-design\\\" with user interface considerations, the paper provides limited insights into UI and user interactions. The paper is technical.***\\n\\nThank you very much for raising this important point. We want to elaborate on the standpoint of the paper a bit more. \\n\\n**Our paper is one of the first to incorporate active user engagement into agentic framework design**, and we consider latency to be a key aspect that users care about from an HCI perspective. As an initial effort in this direction, our focus has been on establishing the technical foundation to enable and integrate user engagement into the system.\\n\\nBy building a system that accelerates agentic task planning and can handle active user interactions, **we are laying the groundwork for broader design considerations**. This includes potential user studies on **how much users want to accelerate the process** (which we can control by setting the approximation agent and the parameter k), **how much information users want to see** (the whole generation process, the final result, or something in between), and **other possible implementations of technical user-interaction mechanisms** such as a roll-back mechanism. 
This technical groundwork is essential for paving the way for more user-centric agent framework designs in the future. We will work on enhancing our discussion to better highlight the HCI elements and provide more insights into how the technical contributions support and inform the UI and user interaction design. \\n\\nThank you again for your valuable feedback.\"}", "{\"summary\": \"Describes a speculative planning algorithm for LLM agents that assumes an approximate model and a target model. It is assumed that the target model is more capable but slower than the approximate model. Planning is performed by the approximate model until it is deemed to deviate from the target model, at which time the approximate model is corrected. Human interaction is also addressed via a rescheduling algorithm and the ability to interrupt and modify the plan.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The problem is relevant: agents are slow and we should try to make them more responsive.\\n\\nThe approach is novel afaik.\\n\\nThe algorithm itself is clearly described.\", \"weaknesses\": \"The algorithm trades off time with efficiency. Are there ideas to also improve efficiency?\\n\\nThe selection of an approximate model and the target model may be difficult to satisfy, and increase system complexity in practical applications.\", \"questions\": \"The evaluation could be improved by comparing a smaller and larger model from a particular model family, e.g. 8B vs. 70B Llama using the same approach.\\n\\nAre there more efficient approaches to validating the approximate model's plan without comparing directly to the target model's plan?\\n\\nIf the user interrupts the plan, do you envision that they edit the agent's trajectory directly, or provide feedback and have the agent regenerate the step?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
BwQUo5RVun
improve weakly supervised visual grounding by learning where to focus on
[ "Zhi Xu", "Yun Fu" ]
Visual grounding is a crucial task for connecting visual and language descriptions by identifying target objects based on language entities. However, fully supervised methods require extensive annotations, which can be challenging and time-consuming to obtain. Weakly supervised visual grounding, which only relies on image-sentence association without object-level annotations, offers a promising solution. Previous approaches have mainly focused on finding the relationship between detected candidates, without considering improving object localization. In this work, we propose a novel method that leverages Grad-CAM to help the model identify precise objects. Specifically, we introduce a CAM encoder that exploits Grad-CAM information and a new loss function, attention mining loss, to guide the Grad-CAM feature to focus on the entire object. We also use an architecture which combines CNN and transformer, and a multi-modality fusion module to aggregate visual features, language features and CAM features. Our proposed approach achieves state-of-the-art results on several datasets, demonstrating its effectiveness in different scenes. Ablation studies further confirm the benefits of our architecture.
[ "weakly supervised learning", "visual grounding", "grad-cam", "vision and language" ]
Reject
https://openreview.net/pdf?id=BwQUo5RVun
https://openreview.net/forum?id=BwQUo5RVun
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rgr7V8ild8", "eqdRhPNLSC", "dC8I3ZNoNN", "LUjcvXjOIX", "8ayPGW3rok", "7lq6EhGerY", "7F3SniB12p", "2InSWdIPgE" ], "note_type": [ "official_comment", "decision", "official_review", "official_review", "official_review", "meta_review", "official_comment", "official_review" ], "note_created": [ 1732671542336, 1737524174608, 1730256664168, 1730550856081, 1730730849486, 1734331993459, 1732635265115, 1730623606015 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12232/Reviewer_K9e7" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12232/Reviewer_2Uie" ], [ "ICLR.cc/2025/Conference/Submission12232/Reviewer_K9e7" ], [ "ICLR.cc/2025/Conference/Submission12232/Reviewer_CwTX" ], [ "ICLR.cc/2025/Conference/Submission12232/Area_Chair_hbvV" ], [ "ICLR.cc/2025/Conference/Submission12232/Reviewer_2Uie" ], [ "ICLR.cc/2025/Conference/Submission12232/Reviewer_vmg3" ] ], "structured_content_str": [ "{\"comment\": \"The authors have not replied my comments, so I maintain my rating as \\\"Reject\\\".\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper focuses on the weakly-supervised learning setting in the Visual Grounding task. The authors propose to use Grad-CAM to explain the model's attention/focus and provide a loss to supervise the behavior of Grad-CAM. They also combine CNN and Transformer structures to provide vision features in different semantic levels.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. Experiment has covered comprehensive datasets.\\n2. Achieve partial state-of-the-art performance on current benchmarks.\\n3. The writing of the idea is easy to understand.\", \"weaknesses\": \"1. **Motivation for the task is questionable**: In line 40 to line 42, the authors claim that Visual Grounding (VG) needs region-level annotation which is time consuming. 
Instead, weakly-supervised VG only needs the image-text pair, so it is easier to collect. However, the text in VG is still a region-level annotation, like \\\"a man in a white shirt next to the door\\\". In the human labeling scenario, I don't see that the additional bounding box (bbox) annotation occupies the main annotation effort, given that the human has already put in the effort to come up with the detailed region-level sentence. **Therefore, this bbox annotation cost in VG needs a strict comparison to see if it matters or not.** In the automatic labeling scenario, we can still use a SOTA object detector to come up with bboxes, and then use set-of-mask or any other region-level captioning technique to provide pseudo labels. Although there might be errors in this automatic labeling pipeline, for the weakly-supervised setting, you still rely on a region-level captioning technique to provide the region-level sentence. **So how do we compare these two pipelines' errors?** If they have similar annotation errors, I don't see the necessity to focus on the weakly-supervised setting.\\n\\n\\n2. **Overclaim about the shortage of previous structures**: In lines 74 to 76, the authors claim that transformer-based methods are less efficient in training than their combination of CNN + transformer. **However, current transformer-based methods do have a CNN structure**, e.g. TransVG, QRNet, VLTVG, VG-LAW, SegVG. They all adopt the encoder of DETR, which is a ResNet + Transformer Layer. Therefore, I don't see the main technical difference in the structure. Moreover, **the paper didn't provide results to support the claim of higher efficiency, such as training cost**.\\n\\n\\n3. **Similar Grad-CAM method already exists**: <Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations> **has already explored the use of Grad-CAM as a supervision signal**; therefore, the novelty of the main contribution of this paper is limited.\\n\\n4. 
**Technical concern about Grad-CAM**: Grad-CAM is not a state-of-the-art post-hoc explanation method, given that the authors have involved a transformer structure. Related work like <Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer> achieves much better explanation results. **Therefore, it might be meaningless to supervise Grad-CAM if it does not reflect the model's actual decision making.**\\n\\n5. **Writing needs to improve**: The figure on page 5 is unclear. The tables on page 7 are unclear.\", \"questions\": \"1. How is the performance when using a different post-hoc explanation method to supervise?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
In the related work section, some recently proposed approaches are not appropriately mentioned, such as VLTVG [1], QRNet [2], LUNA [3], VG-LAW [4] for fully supervised visual grounding, and CPL [5], WSVG [6], AMC [7], enhanced X-VLM [8] for weakly supervised visual grounding.\\n\\n2. The authors claim that \\u201cno previous attempts have been made to integrate Grad-CAM with existing weakly supervised visual grounding methods\\u201d; however, Grad-CAM has already been used in [7] and [8]. Could the authors clarify how their use of Grad-CAM differs from or improves upon the approaches in [7] and [8]?\\n\\n3. Since the introduced attention mining loss was first proposed in the GAIN [9] method, could the authors clarify how their loss differs from that in GAIN and what modifications or improvements they've made? This will help to clarify their contribution.\\n\\n4. For the experiments, the comparison methods seem to be a bit outdated. All of them were published before 2023, which may not fully demonstrate the superiority and effectiveness of the proposed method. Could the authors compare the proposed method with recent works [5-8], especially with [7] and [8], which also utilize the Grad-CAM technique?\\n\\n5. It may be better to visualize the extracted CAM features to show their effectiveness in highlighting the target object areas.\\n\\n6. The text in Figure 4 is too small and hard to read.\\n\\n[1] Yang L, Xu Y, Yuan C, et al. Improving visual grounding with visual-linguistic verification and iterative reasoning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 9499-9508.\\n\\n[2] Ye J, Tian J, Yan M, et al. Shifting more attention to visual backbone: Query-modulated refinement networks for end-to-end visual grounding[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 15502-15512.\\n\\n[3] Liang Y, Yang Z, Tang Y, et al. 
LUNA: Language as Continuing Anchors for Referring Expression Comprehension[C]//Proceedings of the 31st ACM International Conference on Multimedia. 2023: 5174-5184.\\n\\n[4] Su W, Miao P, Dou H, et al. Language adaptive weight generation for multi-task visual grounding[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 10857-10866.\\n\\n[5] Liu Y, Zhang J, Chen Q, et al. Confidence-aware Pseudo-label Learning for Weakly Supervised Visual Grounding[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 2828-2838.\\n\\n[6] Zhang R, Wang C, Liu C L. Cycle-consistent weakly supervised visual grounding with individual and contextual representations[J]. IEEE Transactions on Image Processing, 2023.\\n\\n[7] Yang Z, Kafle K, Dernoncourt F, et al. Improving visual grounding by encouraging consistent gradient-based explanations[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 19165-19174.\\n\\n[8] Pham V Q, Mishima N. Focusing on Targets for Improving Weakly Supervised Visual Grounding[C]//ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023: 1-5.\\n\\n[9] Li K, Wu Z, Peng K C, et al. Tell me where to look: Guided attention inference network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9215-9223.\", \"questions\": \"1. For the visual encoder, I am wondering why combining the transformer and CNN architectures can \\u201creduce computation costs and accelerate the training processing\\u201d?\\n\\n2. For the extraction of CAM features, the authors first need to identify all the nouns in the input referring text. I am wondering how to avoid or mitigate the effect of non-target object nouns on the extracted CAM features. 
For example, for the input text \\u201ca white dog lying on the sofa\\u201d, since \\u201csofa\\u201d is not a part of the target object \\u201cdog\\u201d, the image regions relevant to \\u201csofa\\u201d should not be focused on by the CAM features, so how can its effect be reduced?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors propose a weakly supervised visual grounding architecture that combines the transformer and CNN architectures. Observing that Grad-CAM is useful in weakly supervised training, they design a CAM encoder that utilizes the Grad-CAM to provide better object localization when predicting the final bounding box.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The structure of the paper is complete\"], \"weaknesses\": [\"Q1. The first innovation claimed in this paper is the use of Grad-CAM to enhance weakly supervised grounding ability. However, Grad-CAM has been proposed as an attention tool for many years and has been widely utilized in various fields. Thus, the utilization of Grad-CAM in this paper cannot be considered an innovation.\", \"Q2. The second innovation claimed in this paper is the incorporation of multi-layer features and transformer networks. However, these practices are already widely used in existing grounding systems such as Pseudo-q, CLIP-VG, etc.\", \"Q3. It is worth saying that Pseudo-q is an unsupervised method. However, it is treated as a weakly supervised method in this paper.\", \"Q4. The work presented in this paper is simple and direct, and its overall innovation appears relatively low and is far from meeting the standard of a top-tier conference paper, especially ICLR. 
Therefore, it would be advisable for the authors to consider submitting their work to lower-level journals or conferences.\"], \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No Ethics Concerns\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a method to enhance weakly supervised visual grounding by integrating Grad-CAM and a new attention mining loss into the model architecture. The proposed method achieves state-of-the-art performance on several benchmark datasets. All reviewers are concerned about the limited novelty and unclear explanations in this paper. They recommend rejection, and the authors did not participate in the rebuttal, so the final decision is to reject.\", \"additional_comments_on_reviewer_discussion\": \"No response from the authors.\"}", "{\"title\": \"Maintain my reject rating\", \"comment\": \"Since the authors haven't replied to my concerns, I have decided to maintain my reject rating.\"}", "{\"summary\": \"This paper proposes a method to enhance weakly supervised visual grounding by integrating Grad-CAM and a new attention mining loss into the model architecture. The authors introduce a \\\"CAM encoder\\\" that uses Grad-CAM heatmaps to help the model focus on the right objects. An attention mining loss is designed to guide the Grad-CAM features to cover the whole object. The proposed architecture combines CNNs and transformers in the visual encoder and includes a multi-modality fusion module to aggregate visual features, language features, and CAM features. 
The method achieves state-of-the-art performance on several benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper achieves state-of-the-art performance on 4 out of 5 evaluated datasets\", \"The ablation studies demonstrate the effectiveness of the CAM encoder and attention mining loss.\"], \"weaknesses\": \"1. The definition and role of the attention mining loss are not sufficiently explained. While the loss is mentioned (Lam = Sc(I\\u2217)), the paper does not clearly define how it is computed or how it integrates into the training process. Referring to previous work (GAIN) without a detailed explanation may leave readers unclear about this key component. Also, it is surprising not to have an exact equation for that loss, as it is claimed to be a novel contribution of the paper.\\n\\n\\n2. The term \\\"Grad-CAM features\\\" is used repeatedly throughout the paper, but Grad-CAM typically produces a single heatmap representing the importance of image regions. Referring to this heatmap as \\\"features\\\" is confusing.\\n\\n3. The Grad-CAM heatmaps are obtained from a ResNet trained on ImageNet, which by design limits the method's ability to detect classes not present in ImageNet. This restricts the method's applicability to a broader range of objects, despite the language model's capability to handle diverse text queries.\\n\\n4. The whole subsection 3.2 on the \\u201cLanguage Encoder\\u201d could have been summarised in \\u201cwe use a pretrained BERT encoder\\u201d. I don\\u2019t understand why the authors detail the tokenization and word embedding when it is exactly the same as BERT.\\n\\n5. In Figure 2 it seems that the \\u201cVisual Transformer\\u201d is used before the CNN architecture, but then in Figure 3 the CNN architecture comes first. 
Also, the naming is weird as the \u201cVisual Transformer\u201d is actually just a self-attention operation (it is also not clear if this module uses several attention heads). The authors also say, \u201cWe use a self-attention module to capture the local information of the given features\u201d, but in general, self-attention modules are used to capture global information. Hence, the authors need to add more details on that.\n\n6. The paper mentions the use of a \\"self-taught regression loss\\" and a \\"phrase reconstruction loss\\" following RIF, but does not provide explanations or formulations for these losses. Including details or referring to supplementary materials would enhance clarity.\n\n7. Given that the method is centered around using Grad-CAM, it would be beneficial to include visualizations of the Grad-CAM heatmaps. This would help readers understand how the attention mining loss influences the attention maps and contributes to improved localization.", "miscellaneous": "Tables 1 & 2 are quite small, which can be acceptable if the authors lack space, but the paper is only 9 pages while the page limit is 10.", "questions": "1. Can the authors provide a more detailed formula and explanation of the attention mining loss? Specifically, how is Lam computed, and how does it influence the training process to ensure the Grad-CAM features focus on the whole object?\n\n2. How does using a ResNet trained on ImageNet to generate Grad-CAM heatmaps affect the generalizability of the method? Can the model handle objects not included in the ImageNet classes? If not, how might this limitation be addressed?\n\n3. There appears to be a discrepancy between the figures and the description of the visual encoder. Could the authors clarify the sequence of the CNN and transformer modules in the visual encoder? Additionally, how does the self-attention module capture local information, and does it use multiple attention heads?\n\n4. 
Can you include visualizations of the Grad-CAM heatmaps before and after applying the attention mining loss? This would help illustrate how the loss function improves the focus of the attention maps on the entire object.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BwGeIhGPgn
Evaluating Information Gathering Abilities of Large Language Models with QuestBench
[ "Belinda Z. Li", "Been Kim", "Zi Wang" ]
Large language models (LLMs) have mastered a wide range of reasoning tasks, with an underlying assumption that the tasks are well-specified for LLMs to reach solutions. In reality, queries and instructions to LLMs often contain incomplete or underspecified information. Therefore, LLMs need to be able to actively acquire missing information by asking clarifying questions, ideally seeking the minimally sufficient piece of information. To assess whether LLMs possess this ability, we construct QUESTBENCH, a set of underspecified reasoning tasks that can be solved by asking at most a single question. We frame the tasks as constraint satisfaction problems with missing variable assignments, where the exact model response cannot be determined unless certain variables’ values are acquired. This framework specifically targets tasks where uncertainty stems from missing information, rather than semantic ambiguity in language. QUESTBENCH includes (1) Logic-Q: Logical reasoning tasks where one proposition is missing, (2) Planning-Q: PDDL planning problems where the initial state is partially observed, and (3) GSM-Q: Grade school math problems where one variable assignment is missing. Each task presents multiple choices of possible questions, only one of which is correct. We evaluate Gemini and GPT-4o models and find that they achieve 20–30% accuracy in both zero-shot and few-shot settings. When evaluating GPT-4-o1 on a subset of our data, we find that it is only 41–44% accurate, despite using state-of-the-art inference-time reasoning techniques. When investigating characteristics of QuestBench, we find that LLMs struggle with tasks that are computationally expensive for traditional search-based CSP solvers. Our analyses reveal a negative correlation between LLM accuracy and solver runtime complexity, suggesting that LLMs may share similar limitations to CSP solvers.
[ "information gathering", "question asking", "language model", "evaluation", "benchmarks" ]
Reject
https://openreview.net/pdf?id=BwGeIhGPgn
https://openreview.net/forum?id=BwGeIhGPgn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rsWos8vjkr", "q9yDvY5dmn", "kc9XBfQBSn", "XWdbhFChMc", "WtNh600qiB", "VvUwj5Mz73", "UXsOYMIyRN", "Rfpg8cuEmh", "DxtfuM06a4", "5EBAUBBe68", "4dGBdRvbUR", "2364J7qBuN", "0cgWMtT9n7" ], "note_type": [ "official_review", "official_review", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730716748478, 1730689750684, 1737524171755, 1732658120823, 1733263275756, 1734823992305, 1732658213957, 1732658075620, 1730440858893, 1733257933406, 1732658231775, 1729432005310, 1732658010632 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12191/Reviewer_Hjhv" ], [ "ICLR.cc/2025/Conference/Submission12191/Reviewer_shNW" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12191/Authors" ], [ "ICLR.cc/2025/Conference/Submission12191/Area_Chair_z5Ej" ], [ "ICLR.cc/2025/Conference/Submission12191/Area_Chair_z5Ej" ], [ "ICLR.cc/2025/Conference/Submission12191/Authors" ], [ "ICLR.cc/2025/Conference/Submission12191/Authors" ], [ "ICLR.cc/2025/Conference/Submission12191/Reviewer_Rppr" ], [ "ICLR.cc/2025/Conference/Submission12191/Authors" ], [ "ICLR.cc/2025/Conference/Submission12191/Authors" ], [ "ICLR.cc/2025/Conference/Submission12191/Reviewer_ai7d" ], [ "ICLR.cc/2025/Conference/Submission12191/Authors" ] ], "structured_content_str": [ "{\"summary\": \"Most practical problems require humans to operate in uncertain settings where uncertainty might arise from either ambiguity or underspecification of the problem. It then becomes imperative to obtain relevant information by asking clarification questions. While this is common knowledge in human conversations, with the advent of LLMs, it is essential to evaluate their capacity to reason about uncertainty and actively acquire the necessary information for completing tasks. 
Existing benchmarks are limited in their scope and do not cover complex logic, planning, and math reasoning. The subjective nature of the problem, where the information to acquire might vary across individuals and populations, poses further challenges.\nTowards that, the authors present a collection of question-asking benchmarks, which they call QuestBench, covering logic, planning, and grade school math. They specifically focus on problems that can be formulated as constraint satisfaction problems (CSPs) and, within that, limit the scope to problems that are underspecified. Their benchmark - QuestBench - leverages existing datasets - SimpleLogic, PyperPlan, and GSM-Plus - and converts them into 1-sufficient CSPs, where the size of the smallest sufficient set (of variables required to solve the problem) is 1. They then evaluate some of the proprietary models on these benchmarks and find that while these models perform well in identifying missing information in GSM problems, they struggle with logic and planning problems. They also correlate this performance with different measures of search complexity and hypothesize that the LLMs might possess search skills similar to breadth-first search or brute-force approaches, which become less effective as the search space expands.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The ability of LLMs to ask clarification questions is important to advance their state-of-the-art and their application to solving complex practical problems that are riddled with uncertainty. The authors attempt to address an important problem and take a step forward by presenting a quantitative benchmark to evaluate LLMs.\nThrough evaluation of top models, they highlight their inability to identify missing information, especially in complex planning and logic reasoning tasks, while doing relatively better on math. 
I also appreciate that the authors go a step further to correlate this performance with different measures of complexity. Their hypothesis around the limited search capability of current LLMs, derived from this correlation, seems intuitive and leads to a call to action for the LLM research community.\", \"weaknesses\": \"While the motivation is strong, by limiting the scope to 1-sufficient CSPs, I feel the scope is significantly limited. At least, it is unclear how much of the practical problem space this covers.\nI also found some gaps in the writing that make it difficult to follow. For instance, the challenges around identifying necessary information to acquire and the lack of ground truth would have been better understood with some examples. The notion of constraint satisfaction problems is loosely defined. It is unclear what class of practical problems might fall in this category versus not.\nThe construction details of the datasets are omitted from the main paper and delegated to the appendix. At least a brief description is expected in the main paper.\nOverall, I am concerned about the limited scope of the benchmarks.\", \"questions\": \"I would encourage the authors to consider a larger scope beyond 1-sufficient CSPs or clearly articulate why this is a significant enough problem space.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces QUESTBENCH, a new benchmark designed to evaluate the ability of large language models (LLMs) to handle underspecified tasks by asking clarifying questions. 
QUESTBENCH frames these tasks as constraint satisfaction problems with missing information, focusing on scenarios where uncertainty arises due to missing variables rather than semantic ambiguity.\", \"the_benchmark_consists_of_three_categories\": \"1.\\tLogic-Q: Tasks involving logical reasoning where a missing proposition's value is needed.\\n2.\\tPlanning-Q: Planning problems with undefined initial states requiring additional observations to reach a goal.\\n3.\\tGSM-Q: Grade school math problems lacking critical information for a solution.\\n\\nThe paper evaluates models like Gemini Pro 1.5, GPT-4o, and GPT-4-o1, finding significant room for improvement in their information-gathering abilities, with accuracy ranging from 20% to 44%.\", \"key_contributions_include\": \"1.\\tA framework for evaluating under-specification in LLMs.\\n2.\\tThe creation of the QUESTBENCH benchmark for assessing LLMs' information-gathering skills.\\n3.\\tAnalysis of LLM performance on QUESTBENCH, highlighting areas needing enhancement.\\n\\nOverall, QUESTBENCH provides a structured approach to study how LLMs manage missing information and clarify underspecified instructions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. **Problem Formulation**: The paper introduces a novel benchmark, QUESTBENCH, designed to specifically assess the ability of LLMs to ask clarifying questions for underspecified tasks. This focus on missing information in constraint satisfaction problems distinguishes it from previous benchmarks. The use of constraint satisfaction as a method to frame underspecified tasks is an innovative approach, providing a structured way to evaluate models.\\n2. **Advancing LLM Capabilities**: By highlighting the current limitations of LLMs in handling underspecified tasks, the paper opens avenues for future research and development in enhancing model interactivity and problem-solving under uncertainty.\\t\\n3. 
**Well-Defined Categories**: The formulation that casts the Logic-Q, Planning-Q, and GSM-Q categories as CSPs is clear and logical, making the benchmark easy to understand.\", \"weaknesses\": [\"1. **Limited Scope of Evaluation**:\", \"Details: While the paper evaluates several state-of-the-art models, it may benefit from testing a broader range of LLM families, including smaller or emerging models, to provide a more comprehensive understanding.\", \"Suggestions: Expand the evaluation to include open-source models like LLaMA, Qwen, Mistral, or closed-source models like Claude 3.5\", \"2. **The evaluation metrics are not clearly presented**: In Table 2, the authors mention language model accuracies at predicting the right question. But how do you define if a generated question is accurate?\", \"3. **Lack of insights to handle underspecified problems**:\", \"First, the authors have shown numerous works that aim to actively seek clarification through questions, as noted in Lines 47-48. However, the authors do not evaluate these methods, merely presenting the scores of some basic prompting strategies. Therefore, it is hard to say whether the low performance on QUESTBENCH is caused by inappropriate prompting.\", \"Second, this paper hardly shows a way to overcome underspecified tasks. Though the major goal of this paper is evaluation, offering some insights into overcoming such a challenge could enhance its contribution.\"], \"questions\": \"1. In Table 2, how do you define if a generated question is accurate?\n2. Can you show the results of the methods mentioned in Lines 47-48?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"> Limited Scope of Evaluation: Expand to other open-source LLMs\n\nWe appreciate this suggestion. 
However, please note that results on SOTA LLMs represent an upper bound on the information-gathering ability of generic LLMs. Finding this upper bound helps us answer our question of whether LLMs can actually ask the right question for information gathering, which is the goal of this work. Since this will be an open-sourced benchmark, people will be able to run any model on our benchmark if they are interested.\nThat said, we will obtain more results in the new version of the paper.\n\n> The evaluation metrics are not clearly presented\n\nPlease see the general reply.\n\n> Lack of insights to handle underspecified problems: 1. \u2026numerous works that aim to actively seek clarification through questions\u2026 does not evaluate these methods\n\nPlease note that most of those works are designed for subjective or knowledge-based tasks such as persona-tasks \u201cWhat is a good pasta recipe\u201d [1], human preference eliciting tasks [2], knowledge-based ambiguity tasks \u201cWho won the US open?\u201d [3, 4], or knowledge-based medical diagnosis problems [5]. Their methods either do not apply to our tasks or require significant modifications (such as simulating users or designing rewards) to be applied to the underspecified reasoning tasks in our benchmark. We are not aware of existing methods that solve the 1-sufficient CSPs defined in our work.\n\n> Lack of insights to handle underspecified problems: 2. \u2026shows a way to overcome underspecified tasks. 
Though the major goal of this paper is evaluation, offering some insights into overcoming such a challenge could enhance its contribution.\n\nPlease see the general reply \u201cfuture work on method\u201d.\n\n> In Table 2, how do you define if a generated question is accurate?\n\nPlease see the general reply \u201cHow to compute accuracy\u201d.\n\n> Can you show the results of the methods mentioned in Lines 47-48?\n\nPlease see the reply to \u201cLack of insights to handle underspecified problems: 1. \u2026numerous works\u201d above.\n\n[1] Chinmaya Andukuri, Jan-Philipp Franken, Tobias Gerstenberg, and Noah D Goodman. STaR-GATE: Teaching language models to ask clarifying questions. In Conference on Language Modeling, 2024.\n\n[2] Belinda Z. Li, Alex Tamkin, Noah Goodman, and Jacob Andreas. Eliciting human preferences with language models, 2023. URL https://arxiv.org/abs/2310.11589.\n\n[3] Michael JQ Zhang and Eunsol Choi. Clarify when necessary: Resolving ambiguity through interaction with LMs. arXiv:2311.09469 [cs.CL], 2023.\n\n[4] Jing-Cheng Pang, Heng-Bo Fan, Pengyuan Wang, Jia-Hao Xiao, Nan Tang, Si-Hang Yang, Chengxing Jia, Sheng-Jun Huang, and Yang Yu. Empowering language models with active inquiry for deeper understanding. arXiv preprint arXiv:2402.03719, 2024.\n\n[5] Zhiyuan Hu, Chumin Liu, Xidong Feng, Yilun Zhao, See-Kiong Ng, Anh Tuan Luu, Junxian He, Pang Wei Koh, and Bryan Hooi. Uncertainty of thoughts: Uncertainty-aware planning enhances information seeking in large language models. arXiv:2402.03271 [cs.CL], 2024.\"}", "{\"comment\": \"Thanks for calling my attention to this. I'll work with reviewers on it.\"}", "{\"metareview\": \"(a) Scientific claims and findings:\nThe paper constructs QUESTBENCH, a benchmark to evaluate LLMs' capability to resolve underspecified tasks through information gathering. 
It demonstrates that even advanced LLMs perform poorly, especially on computationally intensive tasks, and identifies potential limitations in LLMs' reasoning mechanisms.\n\n(b) Strengths:\n\n- Novel framing of information-gathering as CSPs, distinguishing underspecification from semantic ambiguity.\n- Comprehensive benchmark spanning logic, planning, and math reasoning tasks.\n\n(c) Weaknesses:\n\n- Limited scope to 1-sufficient CSPs reduces applicability to real-world problems.\n- Absence of natural language tasks, narrowing practical relevance.\n- Insufficient exploration of methods to address failures and improve model capabilities.\n- Evaluation metrics require further clarification, though addressed during the rebuttal.\n\n(d) Decision: reject\nWhile the benchmark and framing of the problem are valuable contributions, the limited scope and practical relevance, along with the lack of actionable insights for improvement, reduce the paper's impact.\", \"additional_comments_on_reviewer_discussion\": [\"Reviewers raised concerns about the limited scope of the benchmark, lack of practical tasks, and absence of insights into improving LLMs' performance. Questions about evaluation metrics and failure analysis were also noted.\", \"Authors' responses:\", \"Clarified the rationale for focusing on 1-sufficient CSPs and its foundational role in solving multi-sufficient CSPs.\", \"Added examples and explanations for accuracy evaluation.\", \"Acknowledged limitations in scope but defended the focus on controlled experiments.\"], \"changes_made\": [\"Title was updated to emphasize \u201cthe right question\u201d for clarity.\", \"Additional examples and clarifications were incorporated into the manuscript.\", \"Authors raised concerns about the lack of engagement of reviewers in discussion. 
Reviewers responded after a reminder but decided to keep their scores unchanged.\"]}", "{\"comment\": \"> The tasks in QuestBench are constructed to be solvable with only a single missing piece of information, which simplifies the challenges of real-world queries. This limited complexity restricts the benchmark's applicability to real-world scenarios.\n\nPlease see the general reply \u201cScope limited to 1-sufficient CSPs\u201d and \u201capplicability to real-world scenarios\u201d.\n\n> lack of natural language tasks in QuestBench\n\nWe asked human annotators to convert a subset of GSM-Q back into word problems (that are missing a premise), and evaluated how well GPT-4o is at finding the missing premise to ask about:\n\n* Original GSM-Q subset: 96.6% accurate\n* Verbalized GSM8K: 89.5% accurate\", \"our_instructions_to_human_annotators_for_converting_gsm_q_into_word_problems_can_be_found_below\": \"> You will be presented with a series of math problems. These math problems are written in words and translated to equations. Your task is to first validate whether the translation is correct given the information present in the problem. If so, you will then be prompted to answer questions for each equation. \n> 1. Is the above list of variables, equations, and the goal equivalent to the original math problem written in words?\n> 2. Please solve for the \u201cGoal\u201d in the above list of variables and equations. Is your answer the same as [orig_answer]?:\n> 3. Try to rewrite the problem to remove all parts of the problem that state any of the above equation(s). Please make sure the problem is still coherent English (e.g. do not simply delete the section you copied above without fixing any grammatical errors). Please also make sure to remove the entire premise, not just replacing numbers with \u201cfew\u201d or \u201csome\u201d. If there is no way to remove the equation (e.g. 
because it wasn\u2019t mentioned in the original problem), please leave the text box empty and check off \u201ccannot remove\u201d.\n> 4. Given the above rewritten problem, is the answer to the question: [] the same as [orig_answer], [] unclear, [] different from [orig_answer]\n\nFurthermore, as noted in Section 3, our work is much more focused on *underspecification* rather than *semantic ambiguity*. While we agree that everyday, natural-language tasks are rich with ambiguous instructions due to semantic ambiguity, we are focused specifically on cases when a piece of information is missing. Thus, we use CSPs as a jumping-off point in order to disentangle these two types of ambiguity.\n\n\n> evaluate LLMs in more practical, real-world contexts where queries are often open-ended or conversational\u2026 ambiguous and underspecified instructions encountered in everyday language use\n\nOur goal is exactly NOT to evaluate queries that are open-ended since the ground truth is unclear and often subjective, which makes the evaluation unreliable. We specifically design the tasks to be 1-sufficient CSPs, so that there exists one correct question to be asked for each task. These are explained in the introduction (Section 1), comparisons to prior work (Section 2), and our effort to disentangle ambiguity and underspecification (Section 3). To further clarify that we are not evaluating the generic information-gathering skills of LLMs, we are changing the paper title to \u201cQuestBench: Can LLMs ask the right question to acquire information in reasoning tasks?\u201d.\n\n> In L415, why use four-shot settings?\n\nWe wanted to benchmark standard setups of LLMs, and few-shot is one of the typical setups. We chose to use 4-shot because all tasks with 4-shot examples fit the context length of the models we evaluated. 
Other setups can be used but we believe the ones we evaluated are representative of the performance.\"}", "{\"comment\": \"> limiting the scope to 1-sufficient CSPs\\u2026 concerned about the limited scope of the benchmarks\\n\\nPlease see the general reply \\u201cScope limited to 1-sufficient CSPs\\u201d.\\n\\n\\n> unclear how much of the practical problem space does this cover\\n\\nPlease see the general reply \\u201capplicability to real-world scenarios\\u201d.\\n\\n> Gaps in writing that make it difficult to follow. Add examples for the challenges around identifying necessary information to acquire and the lack of ground truth \\n\\nThank you for this suggestion. Please note that we have illustrative examples in Figure 1 and the beginning of section 3 for the challenge of identifying necessary information. \\n\\nThe lack of ground truth is due to the well-known rater disagreement problem for subjective tasks [2,3,4,5,6,7]. For generic information gathering tasks, examples for the challenges around the lack of ground truth include\\n1. The user gives an underspecified query: \\u201cgive me recommendations for dinner.\\u201d ChatGPT currently presents a list of dishes and asks the question \\u201cWhat are you in the mood for?\\u201d One person might find the question helpful, but another person might find it too generic and unhelpful.\\n2. The user gives an underspecified query: \\u201cplan a trip to Japan.\\u201d ChatGPT currently presents a long list of steps, and asks \\u201cWould you like a more tailored plan or help booking tickets?\\u201d One might find it helpful since they want ChatGPT to book tickets, but another might find this question not eliciting the most important information, like time of travel.\\n\\nThis can happen for many underspecified queries that involve subjectivity in evaluation. 
We have now included an example in the introduction and cited the above papers.\\n\\nPlease let us know if you have suggestions on other examples to include.\\n\\n> The notion of constraint satisfaction problems is loosely defined. It is unclear what class of practical problems might fall in this category vs not.\\n\\nPlease see our responses above for \\u201climiting the scope to 1-sufficient CSPs\\u2026\\u201d and \\u201c unclear how much of the practical problem space does this cover\\u201d.\\n\\n\\n> The construction details of the datasets is omitted from the main paper and is delegated to the appendix. At least a brief description is expected in the main paper.\\n\\nThanks for this suggestion, we originally moved the construction details to the appendix to save space and not overshadow the main paper with unnecessary details. We preserved what we believe to be the main dataset description in Section 4.\\n\\n> I would encourage the authors to consider a larger scope beyond 1-sufficient CSPs or clearly articulate why this is a significant enough problem space.\\n\\nPlease see the general reply \\u201cScope limited to 1-sufficient CSPs\\u201d.\\n\\n\\n[1] https://en.wikipedia.org/wiki/Constraint_satisfaction_problem\\n\\n[2] Aroyo, Lora, and Chris Welty. \\\"Truth is a lie: Crowd truth and the seven myths of human annotation.\\\" AI Magazine 36.1 (2015): 15-24.\\n\\n[3] Aroyo, Lora, et al. \\\"Dices dataset: Diversity in conversational ai evaluation for safety.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[4] Davani, Aida Mostafazadeh, Mark D\\u00edaz, and Vinodkumar Prabhakaran. \\\"Dealing with disagreements: Looking beyond the majority vote in subjective annotations.\\\" Transactions of the Association for Computational Linguistics 10 (2022): 92-110.\\n\\n[5] Basile, Valerio, et al. \\\"We need to consider disagreement in evaluation.\\\" Proceedings of the 1st workshop on benchmarking: past, present and future. 
Association for Computational Linguistics, 2021.\\n\\n[6] Sandri, Marta, et al. \\\"Why don\\u2019t you do it right? analysing annotators\\u2019 disagreement in subjective tasks.\\\" Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023.\\n\\n[7] Wan, Ruyuan, Jaehyung Kim, and Dongyeop Kang. \\\"Everyone\\u2019s voice matters: Quantifying annotation disagreement using demographic information.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 12. 2023.\"}", "{\"summary\": \"The paper investigates the capabilities of LLMs to ask clarifying questions when dealing with underspecified tasks. These tasks often lack sufficient information to generate an accurate response without additional clarification. To evaluate this, the authors introduce QuestBench, a benchmark of three tasks (Logic-Q, Planning-Q, and GSM-Q) that require one clarifying question to resolve underspecified queries.\\n\\nThe study tested several SOTA models on QuestBench and found performance to be suboptimal. The findings reveal a gap in LLMs' ability to gather necessary information, particularly for complex logic and planning tasks.\\n\\nThe authors contribute by presenting a constraint satisfaction framework focused on evaluating underspecification and perform analyses on model performance correlations with reasoning mechanisms. Their results suggest LLMs struggle with larger solution spaces and deeper search requirements, showing potential limitations in the models' reasoning capabilities and highlighting areas for future improvements in question-asking and information-gathering skills in LLMs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a benchmark specifically aimed at evaluating the information-gathering abilities of LLMs when faced with underspecified tasks. 
The benchmark is well-designed and covers three types of tasks, including logic reasoning, planning, and math problems.\n\n2. The paper provides various evaluations and insights into the types of reasoning mechanisms LLMs may currently lack, which could be useful in future improvements of the models.\", \"weaknesses\": \"1. The tasks in QuestBench are constructed to be solvable with only a single missing piece of information, which simplifies the challenges of real-world queries. This limited complexity restricts the benchmark's applicability to real-world scenarios.\n\n2. A potential weakness of the paper is the lack of natural language tasks in QuestBench. The current benchmark primarily includes structured tasks, such as logic reasoning, planning, and math problems, which lack the variability and richness of natural language interactions. This limits QuestBench\u2019s ability to evaluate LLMs in more practical, real-world contexts where queries are often open-ended or conversational. Including natural language tasks would provide a more comprehensive assessment of LLMs\u2019 information-gathering abilities, as these tasks better reflect the types of ambiguous and underspecified instructions encountered in everyday language use.\", \"questions\": \"1. In L415, why use four-shot settings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Concerns about the lack of discussion and unfair evaluation\", \"comment\": \"We appreciate the reviewers' time and effort in providing the initial feedback. 
We submitted a detailed rebuttal addressing all of the points raised by the reviewers, but it is unfortunate that **no reviewer participated in the discussion or acknowledged reading the rebuttal/paper revision**.\n\nWhile we understand that reviewing for a conference of this scale can be time-consuming and demanding, the absence of any discussion following our rebuttal raises concerns about the fairness and the quality of the evaluation.\n\nMoreover, we worry that our inclusion of a formal mathematical formulation prevented some reviewers from understanding the focus and significant contribution of our work, including being the first to rigorously define underspecification in reasoning tasks and reliably evaluate the information-gathering abilities of frontier models.\n\nWe know it is unlikely, but we hope the AC and reviewers can read the rebuttal and the revised paper, let us know if any additional clarification is needed, and fairly recalibrate the ratings. If you finish reading this message, thank you for your attention.\"}", "{\"comment\": \"> valuable findings and insights\n\nWe analyzed the correlation between search complexity and accuracy. We believe that this is a valuable finding and insight.\n\n> why the existing model lacks the ability of \\"information gathering\\"? 
Is it data, algorithm, or other factors?\n\nWe believe that there are multiple reasons, including\n \n- the lack of training or fine-tuning data for information gathering and question-asking tasks, especially for tasks that require asking a single best correct question;\n- the lack of planning and complex reasoning capabilities, as demonstrated in [1].\n\n\n> In what direction should we work further to improve the model's ability in this aspect?\n\nPlease see the general reply \u201cfuture work on method\u201d.\n\n\n> For the failure cases of the models, we can add some statistical analysis to summarize the types and causes of failures.\n\nIn our statistical analyses (Section 6), we found that an increase in search complexity leads to a growth in the number of failure cases. The types and causes are shown quantitatively in the factors described in Section 6.\n\nCould you please clarify what other types and causes we can include?\n\n> Select a subset of failure cases, summarize the reasons for model failure, and analyze the reasons that led to the failure.\n\nWe have already done this kind of analysis in Section 6 and found that the model failures are related to an increase in search complexity. Please let us know if you have recommendations for other analyses.\n\n> Discuss ways to improve the model's information-gathering capability. If possible, it would be better to conduct experiments to verify the feasibility of the methods.\n\nPlease see the general reply \u201cfuture work on method\u201d. Unfortunately, given the limited space of the paper, we found it difficult to include methods.\n\n> details missing in the evaluation process. Specifically, assessing the model's accuracy is a non-trivial task\n\nPlease see the general reply \u201chow to compute accuracy\u201d.\n\n> Could you please tell me how the accuracy was evaluated in this paper? Was it evaluated manually by humans or using an LLM? 
What were the specific evaluation criteria?\\n\\nPlease see the general reply \\u201chow to compute accuracy\\u201d. We specifically avoid the need for humans or LLMs to evaluate since they can be unreliable. The tasks are multiple-choice problems, which makes them very easy to evaluate.\\n\\n> Could you please share some insights on how to improve the information-gathering capability of the model based on the evaluation results?\\n\\nPlease see the general reply \\u201cfuture work on method\\u201d.\\n\\n[1] Valmeekam, Karthik, et al. \\\"On the planning abilities of large language models-a critical investigation.\\\" Advances in Neural Information Processing Systems 36 (2023): 75993-76005.\"}", "{\"summary\": \"This paper focuses on the ability of large language models to actively request information from users when faced with semantically clear but underspecified questions. To evaluate this ability, the authors created QuestBench, a benchmark of underspecified tasks that can be solved by asking at most one question. This dataset includes three tasks:\\n\\n- Logic-Q: Logical reasoning tasks where one proposition is missing\\n- Planning-Q: PDDL planning problems where the initial state is underspecified\\n- GSM-Q: Grade school math problems where one variable assignment is missing\\n\\nThe GSM-Q task was manually annotated. The authors evaluated existing models such as GPT-4o, Gemini, and o1. They found that even o1, which has significantly better reasoning abilities, struggles to perform well on these tasks.\\nThis research highlights the challenges large language models face when dealing with underspecified questions and their ability to ask for clarification.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The information-gathering ability is important for large language models. The authors provide a dataset to evaluate this capability.\\n2. 
The definitions of key concepts and the methods for constructing the dataset are described very clearly.\\n3. The authors evaluated several advanced models (GPT-4o, Gemini, o1) and conducted some correlation analyses between search complexity and LLM accuracy.\", \"weaknesses\": [\"1. While some models were evaluated, there was a lack of valuable findings and insights. Specifically,\", \"What are the potential reasons why the existing model lacks the ability of \\\"information gathering\\\"? Is it data, algorithm, or other factors?\", \"In what direction should we work further to improve the model's ability in this aspect?\", \"For the failure cases of the models, we can add some statistical analysis to summarize the types and causes of failures.\", \"Thus, I suggest adding experiments about the following points that could be helpful:\", \"Select a subset of failure cases, summarize the reasons for model failure, and analyze the reasons that led to the failure.\", \"Discuss ways to improve the model's information-gathering capability. If possible, it would be better to conduct experiments to verify the feasibility of the methods.\", \"2. There are important details missing in the evaluation process. Specifically, assessing the model's accuracy is a non-trivial task. This is because the correct behavior is to request the missing information in the underspecified question. However, the authors don't seem to describe this point in the paper. Overall, the authors need to provide details regarding how to judge whether the question asked by the model is correct.\"], \"questions\": \"Could you please tell me how the accuracy was evaluated in this paper? Was it evaluated manually by humans or using an LLM? 
What were the specific evaluation criteria?\\n\\nCould you please share some insights on how to improve the information-gathering capability of the model based on the evaluation results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank all reviewers for their insightful feedback and for recognizing our key strengths, including\\n\\n- the importance of evaluating the ability of LLMs to ask clarification questions [Hjhv, shNW, ai7d]; \\n- a novel CSP-based formulation [shNW, Rppr] and benchmark [shNW, Rppr, ai7d];\\n- analyses and insights on why SOTA LLMs achieve low performance [Hjhv, ai7d];\\n- well-designed (Rppr), clear and easy to understand (shNW, ai7d). \\n\\n**We believe the ratings are too harsh and not calibrated, and ask the reviewers to please reconsider the ratings.**\\n\\nBelow we include responses to some common points brought up by reviewers.\\n\\n> Scope limited to 1-sufficient CSPs. [Hjhv, Rppr]\\n\\n1. reasons for limiting the scope to CSPs:\\n\\n- A range of reasoning tasks can be formulated as CSPs as shown in our benchmark. Moreover, classic CSPs cover a wide variety of important problems studied in fields like AI and operations research [1]. \\n- As described in Section 3.1, we find that formulating information gathering as CSPs can effectively allow us to disentangle semantic ambiguity and underspecification. We are not aware of other formulations that can separate those two in such a clean way. Please note that this novel formulation is one of our core strengths, highlighted by Reviewers shNW and Rppr.\\n- As such, we believe that the CSP formulation is a foundational piece and critical tool for studying the information gathering problem in LLMs.\\n\\n2. reasons for limiting the scope to tasks that only require one question:\\n\\n- Imagine a task that requires k questions. 
Once the first question is answered, the task will require k-1 questions. Hence eventually all such tasks reduce to 1-sufficient CSPs.\\n- Evaluations on 1-sufficient CSPs serve as an upper bound on the performance for k-sufficient CSPs since in order to solve k-sufficient CSPs, one must be able to solve 1-sufficient CSPs. Hence 1-sufficient CSPs would be the first set of problems to tackle to improve the information gathering skills for reasoning problems. \\n- Human users may find many questions from AI assistants to be annoying. In practice, AI assistants may ask only a few questions before solving a user-specified task.\\n\\nWe have now clarified these points in the paper (see footnote 1 in introduction) and changed the title of our work to \\u201cQuestBench: Can LLMs ask the right question to acquire information in reasoning tasks?\\u201d to highlight \\u201cthe right question\\u201d (singular form).\\n\\n> applicability to real-world scenarios [Hjhv, Rppr]\\n\\nFor this work, we deliberately chose to construct underspecified reasoning tasks, covering practical problems that involve solving grade school math problems and partially observable robot planning (i.e., initial state is not fully known to the robot). These tasks highlight the basic information gathering ability of LLMs, similar to the basic math ability evaluated by GSM8K, the basic logical reasoning skills evaluated by SimpleLogic, etc. To ensure the scope of our work is clearly communicated, we have now changed the title of our work to \\u201cQuestBench: Can LLMs ask the right question to acquire information in reasoning tasks?\\u201d. Please let us know if you have better suggestions.\\n\\nThe exact practical problem space is difficult to determine without grand scale data collection for what people are using LLMs for. 
One can argue none of the existing academic LLM benchmarks cover the majority of \\u201cpractical problems\\u201d since they lack the ability to acquire actual human user data in real practical commercial settings.\\n\\n> How to compute accuracy [shNW, ai7d]\\n\\nAccuracy is computed by exact match with the ground-truth question. In L257, we explain that \\u201cDuring evaluation, we consider a LLM\\u2019s behavior to be correct if they produce a variable in any 1-sufficient set\\u201d. To clarify what this means: we prompt LMs to simply pick a variable to ask about, and check if the variable picked by the LM matches a ground-truth sufficient variable. Prompts for each dataset can be found in Appendix B.\\n\\n\\n> future work on method [shNW, ai7d]\\n\\nOne recommendation is to use LLMs to extract the symbolic CSP for an underspecified task and then run search algorithms to find the right variable to clarify. We have now made this clear in the discussion and conclusion section.\", \"title\": \"General reply\"}" ] }
Bvqsas4TYX
Selective Preference Optimization via Token-Level Reward Function Estimation
[ "Kailai Yang", "Zhiwei Liu", "Qianqian Xie", "Jimin Huang", "Erxue Min", "Sophia Ananiadou" ]
Recent advancements in large language model alignment leverage token-level supervisions to perform fine-grained preference optimization. However, existing token-level alignment methods either optimize on all available tokens, which can be noisy and inefficient, or perform selective training with complex and expensive key token selection strategies. In this work, we propose Selective Preference Optimization (SePO), a novel selective alignment strategy that centers on efficient key token selection without requiring strong, fine-grained supervision signals. We theoretically prove the feasibility of Direct Preference Optimization (DPO) as token-level reward function estimators, which applies to any existing alignment datasets and enables cost-efficient token selection with small-scale model sizes and training data. We then train an oracle model with DPO on the target data and utilize the estimated reward function to score all tokens within the target dataset, where only the key tokens are selected to supervise the target policy model with a contrastive objective function. Extensive experiments on three public evaluation benchmarks show that SePO significantly outperforms competitive baseline methods by only optimizing on 30\% key tokens. We also explore SePO as a new paradigm for weak-to-strong generalization, showing that weak oracle models effectively supervise strong policy models with up to 16.8$\times$ more parameters. SePO also selects useful supervision signals from out-of-distribution data, alleviating the over-optimization problem.
[ "large language models", "preference optimization", "alignment" ]
https://openreview.net/pdf?id=Bvqsas4TYX
https://openreview.net/forum?id=Bvqsas4TYX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pydYuF4Msf", "m4Ko8ulg7v", "R6ChSKi4K4", "LVWsJts1yd", "Dw4kCqMMyv" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730712738771, 1733451524936, 1729444174505, 1730721678781, 1730364708298 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6269/Reviewer_aUUy" ], [ "ICLR.cc/2025/Conference/Submission6269/Authors" ], [ "ICLR.cc/2025/Conference/Submission6269/Reviewer_x9Hj" ], [ "ICLR.cc/2025/Conference/Submission6269/Reviewer_112v" ], [ "ICLR.cc/2025/Conference/Submission6269/Reviewer_mfm1" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces Selective Preference Optimization (SePO) which optimizes model performance by selectively training only on key tokens with high token-level reward values using Direct Preference Optimization (DPO).\\n\\nThis approach significantly reduces the data requirements by focusing on 30% of the tokens, avoiding noise from less informative tokens and improving computational efficiency.\\n\\nAdditionally, this paper also explores weak-to-strong generalization, demonstrating that weaker oracle models can provide useful supervision for larger, more powerful policy models. \\n\\nExperimental results across three benchmarks show that SePO outperforms baseline methods in alignment tasks, supporting its effectiveness and adaptability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper introduces a novel token-level reward function estimator using DPO.\\n\\nSePO reduces the need for extensive token optimization, demonstrating improved alignment performance while training on only 30% of tokens. This is valuable for scaling LLMs and reducing computational overhead.\\n\\nThe weak-to-strong generalization capability of SePO allows smaller models to supervise larger ones.\", \"weaknesses\": \"The experiments primarily involve relatively moderate-sized models. 
Testing SePO on stronger models, such as LLaMA2-Chat-70B, would provide further insights into its scalability and potential bottlenecks, especially for the weak-to-strong generalization experiment.\\n\\nCompared to other methods, the improvement seems to be slight.\", \"questions\": \"How does SePO scale with very large policy models?\\n\\nHow can the token selection threshold be chosen more effectively for different datasets and models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes selective preference optimization (SePO). SePO selects the top-k tokens that dominate the final reward and trains DPO on these tokens to eliminate noise and improve efficiency. Experiments show that SePO outperforms a number of direct preference learning methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The strengths of this paper are listed as follows:\\n\\n1. This paper observes that the total reward of a generated utterance is usually dominated by a few tokens. This observation is interesting and motivates the method well.\\n\\n2. This paper proposes a token-selection-based training method, which is new and interesting to me.\\n\\n3. The experiments are comprehensive and the results look good.\", \"weaknesses\": \"My concerns are listed as follows:\\n\\n1. My major concern is about the token selection mechanism. The motivation behind using $\\\\hat{r}(s_t,a_t)$ as the proxy of the reward is unclear to me. Theorem 1 only proves that $\\\\sum \\\\hat{r} (s_t, a_t) + V^{*}(s_1) = \\\\sum r(s_t, a_t)$, which only guarantees that the sum of $\\\\hat{r}$ and the sum of $r$ is the same (up to a constant). However, the value distribution of $r$ and $\\\\hat{r}$ might still be drastically different. 
Therefore, the token selection based on $\\hat{r}=\\log (\\pi_{\\theta} / \\pi_{\\text{ref}})$ does not make sense given the current illustration. \\n\\n2. It looks like SePO is quite sensitive to the parameter $\\gamma$. The search space of $\\gamma=\\{2.1, ...,2.5\\}$ looks weird, and the performance seems to fluctuate when $\\gamma$ varies in this set. This is an issue given that the improvement over the baselines is not that significant.\\n\\n3. (Minor issue): The $\\propto$ in equation (6) looks like a typo\", \"questions\": \"See weakness section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Selective Preference Optimization (SePO), a novel strategy for aligning large language models (LLMs) at the token level by selectively optimizing only on key tokens. Leveraging Direct Preference Optimization (DPO), SePO identifies and optimizes high-impact tokens, reducing supervision costs while improving alignment performance on benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"SePO offers a cost-efficient alignment strategy by focusing on a subset of high-reward tokens, which reduces annotation costs.\\nThe method demonstrates better performance on several benchmarks, surpassing existing token-level and response-level alignment methods.\\nSePO\\u2019s weak-to-strong generalization enables effective supervision from smaller, weaker oracle models, showing scalability across varying model sizes.\", \"weaknesses\": \"1. The method is limited by the requirement that oracle and policy models share the same vocabulary and tokenizer, which reduces flexibility across different model architectures.\\n2. 
The use of the DPO reward format as an automated credit assignment behaviour has been attempted by other works, and the paper's contribution is weaker, as it only quantifies the results of this assignment as weights in the DPO loss.\\n3. Suppose the confidence given by the Oracle model is used as the gold label for the credit distribution. In that case, we do not need DPO to fit the reward distribution given by the optimal policy (https://arxiv.org/abs/2404.12358, https://arxiv.org/abs/2408.14874). Alternatively, the paper needs to discuss the error problems associated with this approximation in the method to validate the need for SePO further.\\n4. Performance increases are relatively minor in the experiments, and the comparison model used is GPT-4-0314 (not the current strongest model; moreover, win rates are not compared against the untrained model itself, which would provide a more intuitive measure of the improvement).\", \"questions\": \"1. How does performance compare when win rates are computed against the trained models themselves?\\n2. The model used for the experiment is a bit old, and SePO needs to prove its performance on newer open-source models.\\n3. The theoretical part needs to be further refined, and I would like to see a discussion on whether DPO credit assignments need to be constructed through token-level weighted training, and whether there exist better ideas to take advantage of this feature of the DPO reward format.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose SePO\\u2014a method that utilizes selected tokens from an oracle model to perform preference optimization. The approach is evaluated across a wide range of models and general assistant benchmarks. 
The authors report that by optimizing only 30% of the tokens, they were able to surpass other methods for preference optimization.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The idea is clear and novel.\", \"The reported results indicate the promise of the approach.\"], \"weaknesses\": [\"The proof of Theorem 1, which asserts that after training a DPO, the reward function can be expressed as a decoupled reward $\\\\hat{r}$, inherits this property (Line 810) from the assumption that the reward can be written in such a manner (Assumption 1). This raises the question of whether all reward functions can be expressed in a decoupled way. From a naive perspective, a decoupled reward is not normalized, and longer texts might have larger absolute values of reward. In my attempts to learn reward models in online settings using a decoupled approach, I found that without normalization, their accuracy dramatically reduced. Normalizing the sum of small rewards over tokens led to improvements. Therefore, I strongly feel that not all rewards can be expressed in this way. Moreover, the training objective (Equation 11) uses a normalized reward, making it unclear why Theorem 1 was presented.\", \"Some of the writing is ambiguous. The SePO objective (Equation 11) is hard to parse visually and would benefit from a human-understandable explanation before the equation.\", \"The experiments lacked exploration of the dependence of performance on the KL divergence with the reference policy. It is evident that training a policy with the SePO objective could cause it to diverge significantly. This is similar to observations in Rafailov et al. [1]. 
For instance, could selecting a lower $\\\\beta$ value enable DPO or other baselines to perform better than SePO?\", \"[1] Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms\"], \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BvlaNTMl7P
SINAI: Selective Injection of Noise for Adversarial Robustness with Improved Efficiency
[ "Zhenyu Liu", "Garrett Gagnon", "Swagath Venkataramani", "Liu Liu" ]
Deep Neural Networks (DNNs) have revolutionized a wide range of industries, from healthcare and finance to automotive, by offering unparalleled capabilities in data analysis and decision-making. Despite their transforming impact, DNNs face two critical challenges: the vulnerability to adversarial attacks and the increasing computational costs associated with more complex and larger models. In this paper, we introduce an effective method designed to simultaneously enhance adversarial robustness and execution efficiency. Unlike prior studies that enhance robustness via uniformly injecting noise, we introduce a non-uniform noise injection algorithm, strategically applied at each DNN layer to disrupt adversarial perturbations introduced in attacks. By employing approximation techniques, our approach identifies and protects essential neurons while strategically introducing noise into non-essential neurons. Our experimental results demonstrate that our method successfully enhances both robustness and efficiency across several attack scenarios, model architectures, and datasets.
[ "Adversarial Robustness", "Efficient Neural Networks", "Hardware and Software Co-design" ]
https://openreview.net/pdf?id=BvlaNTMl7P
https://openreview.net/forum?id=BvlaNTMl7P
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uIwKHgtjzN", "ShZw65ZGif", "IQSioUwBzZ", "FL9BlNhMpp", "9ZdHKalYhu" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730590673844, 1730661173799, 1730459488744, 1730592124964, 1732035153676 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5125/Reviewer_f6J5" ], [ "ICLR.cc/2025/Conference/Submission5125/Reviewer_yUfK" ], [ "ICLR.cc/2025/Conference/Submission5125/Reviewer_Sfxu" ], [ "ICLR.cc/2025/Conference/Submission5125/Reviewer_YMx1" ], [ "ICLR.cc/2025/Conference/Submission5125/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a method that enhances both adversarial robustness and computational efficiency in DNNs. The key idea is to selectively inject noise into non-essential neurons while preserving essential neurons, identified through a learning-based approximation method. The authors demonstrate improvements in robustness against various attacks while reducing computational costs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1)\\tThe paper proposes a new approach combining adversarial robustness and efficiency.\\n2)\\tThe authors conduct extensive experiments across multiple datasets (CIFAR-10, CIFAR-100, ImageNet), and attack methods (PGD, FGSM, CW, MIFGSM, AutoAttack).\", \"weaknesses\": \"1)\\tWhile the technical contribution of this work is sound, a major weakness of the work is writing. Currently, the paper's writing suffers from two major issues: 1) The methodology is fragmented across multiple sections with poor transitions, making it difficult to understand the complete technical approach as a whole, and 2) The paper relies heavily on dense mathematical formulations without providing sufficient intuitive explanations, particularly in the theoretical sections 3.2 and 3.3.\\n2)\\tDoes the proposed method improve execution time? 
The authors should report execution time in addition to the reported BitOPs.\\n3)\\tThe paper doesn't explain how to optimally select the threshold for determining essential vs. non-essential neurons.\\n4)\\tThe current evaluation on CNNs is limited to ResNet-based architectures. Some other CNN variants should be investigated.\", \"questions\": \"Please refer to the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a method for defending against adversarial attacks by injecting random noise into \\\"non-essential\\\" neurons. They developed an algorithm to select \\\"non-essential\\\" neurons by projecting to a low-dimensional space and training an approximation layer with quantization techniques applied. This method also has the benefit of reducing computational cost.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The presentation of this paper is good and easy to follow. The figures are intuitive and helpful. The algorithms are informative and compact.\\n\\n2. 
Especially I like ablation study 4.4, where I found answers to many of my previous questions when reading the paper.\", \"weaknesses\": \"1. When talking about adversarial defense, it is critical to talk about the trade-off between clean accuracy and adversarial robustness. Especially for a noise-injection type of approach, we know that different strengths of noise injection (e.g., ratio of injected neurons, size of the random noise) will result in different clean accuracy and adversarial accuracy combinations. Therefore, instead of reporting a single point, it would be better to report the accuracy-robustness trade-off graphs obtained by using different strengths of noise injection. A single point often cannot support a solid conclusion. (please see details in the questions section)\\n\\n2. It is not a weakness, but I wonder if there is a good way to \\\"visualize\\\" the different roles played by essential and non-essential neurons. E.g., it would be great if we could show that \\\"non-essential neurons\\\" become more active when processing adversarial examples than clean examples.\\n\\n3. Again not a weakness but just a suggestion. It would be great to draw figure 4 as an acc-robustness trade-off graph where one axis is the clean acc and the other is the robust acc. It would be more clear to demonstrate that injecting noise on \\\"non-essential\\\" neurons does achieve a better trade-off than injecting noise on essential neurons.\", \"questions\": \"1. What is the reason to use a low-dimension space to choose the \\\"non-essential\\\" neurons? Is it solely for computational cost consideration? What if we don't use an approximation layer but instead directly choose on the original layer?\\n\\n2. On the accuracy-robustness trade-off issue mentioned in the \\\"weakness\\\" section.\\n2.1 On table 1, the clean acc. of SINAI and RPF are 82.37 and 83.79 while the acc. with PGD are 67.27 and 61.25. What if we inject weaker noise to make the clean acc. of SINAI also around 83.79? 
Supposedly it should also lead to a weaker defense and lower acc. under PGD, and it would be a more precise comparison with RPF.\\n\\n2.2 Section 4.2 does not report clean acc. What is the clean acc. of the methods compared in this section?\\n\\n2.3 Similarly, Section 4.3 does not report clean acc.\\n\\nI would be happy to adjust my review scores if the authors can address my questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work indicates that robustness can be enhanced by introducing noise into the internal layers of the target model. Additionally, it suggests that the weights can be divided into non-essential and essential neurons, which play different roles during model prediction.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This work provides extensive experiments to support the proposed idea.\", \"weaknesses\": \"1. **Poor Presentation:** In Section 1, the authors state, \\\"we introduce injecting noise to non-essential neurons to enhance the robustness and efficiency of DNNs.\\\" However, they do not clearly explain why non-uniform noise is needed or the purpose of distinguishing between non-essential and essential neurons. The authors only provide a basic introduction to noise-based defense in Section 2.\\n\\n2. **Unclear Motivation:** As mentioned previously, it is difficult for readers to follow why the proposed idea\\u2014a learning-based method to identify essential neurons and inject noise into non-essential neurons\\u2014is necessary. AWP also introduces extra perturbations into each layer, which are non-uniform. The authors should clearly explain how their approach differs from prior works to strengthen the motivation.\\n\\n3. **Unclear Implementation:** The paper introduces the proposed algorithm primarily with fully connected (FC) layers. 
However, in Wide Residual Networks (WRNs) or CNN-based networks, the FC layer typically appears only as the last layer. Does this imply that the proposed method can only be applied to the last layer? Additionally, the role of \\u03d5 in Algorithm 2 is not defined in the main text.\\n\\n4. **Unfair Comparison:** The attacking step for APGD is set to 20 in this work (lines 944-955), while the default setting is 100. This discrepancy suggests that the robustness may be overestimated.\\n\\n5. **Overestimated Robustness:** If I understand correctly, random noise is injected in Algorithm 2 (step 2). However, AutoAttack is not a suitable method for assessing defenses that involve stochastic processes [*1].\\n\\n[*1] https://github.com/fra31/auto-attack/issues/58\", \"questions\": \"A fair comparison is needed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores a new method to improve both the adversarial robustness and the execution efficiency of DNNs by identifying essential and non-essential neurons. While essential neurons are protected, non-essential neurons are injected with noise. To identify essential neurons, the paper adopts a learning-based approximation method. 
In general, the proposed method can improve adversarial robustness and reduce computational costs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation of the paper is clear, and the idea of injecting noise into non-essential neurons intuitively makes sense.\", \"The paper is well-structured and easy to follow.\", \"Experiments were conducted against popular white-box attacks, and the empirical improvements compared to baselines seem significant.\"], \"weaknesses\": [\"My main concerns are:\", \"Some of the theoretical parts do not support the authors' claims about the improvement in robustness or clearly explain the clean accuracy preservation.\", \"The baselines are not SOTA or recent.\", \"Some parts of the paper are not well written.\", \"Please see the section on questions for a list of my detailed concerns.\"], \"questions\": \"## Major\\n1. The contribution is not well written. The first and the second contributions are not distinguished enough. Both contributions state that the proposed method is novel, retaining the essential neurons and employing selective noise injection to enhance both robustness and clean accuracy or model efficiency. The authors should rewrite the contributions to make them more distinguished and concise. For example, they could focus on the novelty of the method in one point, and its specific benefits (robustness and efficiency) in another. \\n2. While baselines and competitors are adversarially trained models, the authors should clarify whether SINAI is applied to adversarially trained or non-adversarially trained models and provide results for both scenarios if possible. This would help validate the claim about the method's generalizability, as the authors claimed the proposed method is general and can be applied to any pre-trained network without retraining from scratch. If the base model is non-adversarially trained, the clean accuracy seems very low; e.g., 
the expected clean accuracy of resnet18 on CIFAR-10 should be higher than 90%. \\n3. Some notations and terms are not well explained and described, e.g. what is $Loss_{orignal}$ in algorithm 1 and $Q(x)$ in algorithm 2, and $k, d$ in line 182. What is $P_{pq}$ in lines 256-257? What is $s$? Hence, I suggest the authors include a notation table or provide explicit definitions when these terms are first introduced. This would significantly improve the paper's readability.\\n4. What is the convergence in algorithm 1? The paper lacks a complete definition of the condition of convergence. \\n5. In contribution, the authors state that the proposed method can be applied to any pre-trained model without retraining from scratch. Why are model parameters $W, b$ updated in algorithm 1? \\n6. How is sparse random projection $P$ randomly generated? Since theorem 1 confirms that a random projection matrix $P$ exists but not any $P$ such that the clean accuracy can be preserved, it is unclear if sparse random projection $P$ can guarantee the preservation of clean accuracy. \\n7. Section 3.4 and theorem 2 are general and useful but it is unclear how this can guarantee robustness improvement of SINAI since it does not state how SINAI improves adversarial robustness. The authors should clearly explain how theorem 2 can apply to SINAI. Since the noise needs to balance both clean accuracy preservation and adversarial robustness enhancement, the authors should explain how the noise in SINAI helps obtain both objectives of improving clean accuracy and enhancing adversarial robustness.\\n8. The improvement in the evaluation section, particularly reported in Tables 1, 2 and 4, is misleading and not well explained. Since SINAI is not developed on top of overfitting adversarial training (OAT), why is the improvement compared with OAT? Additionally, OAT is fairly old and is not SOTA; the improvement of SINAI should be compared with SOTA methods such as RPF. \\n9. 
Some recent SOTA adversarially trained models [2, 3 and 4] have been demonstrated on Robustbench [1]. Since OAT is not SOTA, do the authors compare SINAI with one of the methods in [2, 3 and 4], or how does your method perform if one of them is incorporated into SINAI? \\n10. While the authors show the scalability of SINAI against two different perturbation budgets on CIFAR-10 and CIFAR-100 when compared with OAT, it is unclear how it compares with other SOTA methods, e.g. RPF, since RPF could achieve better robustness than SINAI at high perturbation budgets. Therefore, I suggest that the authors extend their comparisons to state-of-the-art methods like RPF, while still including OAT as a reference point. This would provide a clearer picture of SINAI's contributions to the current state of the field.\\n11. A threshold is used to identify essential and non-essential neurons but it is not well described and analysed in section 4 (Evaluation). How did the authors select m and the threshold to obtain 10% essential neurons? Is it 10% of the neurons of each layer? It is unclear how well the proposed method performs with different percentages of essential neurons. \\n12. What is the noise injection ratio? How do you choose the best noise injection ratio for each dataset? The authors should provide an analysis of the noise injection ratio for Imagenet.\\n\\n## Minor\\n1. If possible, please sort the citations in time order. For example, lines 077-079 or lines 123-124.\\n2. The experiment settings are not well described. For instance, it is unclear whether all attacks in Section 4 are l2 or linf attacks and what perturbation budgets are used in experiments in Section 4.1 and 4.3. Is the entire test set of each dataset (CIFAR-10, CIFAR-100 and ImageNet) used for evaluation? If not, the authors should describe it clearly.\\n3. 
In section 4.3, no clean accuracy is reported for Imagenet.\\n\\n[1] https://robustbench.github.io/ \\n\\n[2] Bartoldson, Brian, James Diffenderfer, Konstantinos Parasyris and Bhavya Kailkhura. \\u201cAdversarial Robustness Limits via Scaling-Law and Human-Alignment Studies.\\u201d ICML2024\\n\\n[3] Wang, Zekai, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu and Shuicheng Yan. \\u201cBetter Diffusion Models Further Improve Adversarial Training.\\u201d ICML2023\\n\\n[4] Sehwag, Vikash, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, Mung Chiang and Prateek Mittal. \\u201cRobust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?\\u201d ICLR 2022.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
BvQkjCnXXr
Simple Yet Efficient Locality Sensitive Hashing with Theoretical Guarantee
[ "Zongyuan Tan", "Hongya Wang", "Bo Xu", "Minjie Luo", "Ming Du" ]
Locality-sensitive hashing (LSH) is an effective randomized technique widely used in many machine learning tasks such as outlier detection, neural network training and nearest neighbor search. The cost of hashing is the main performance bottleneck of these applications because the index construction functionality, a core component dominating the end-to-end latency, involves the evaluation of a large number of hash functions. Surprisingly, however, little work has been done to improve the efficiency of LSH computation. In this paper, we design a simple yet efficient LSH scheme, named FastLSH, by combining random sampling and random projection. FastLSH reduces the hashing complexity from $O(n)$ to $O(m)$ ($m<n$), where $n$ is the data dimensionality and $m$ is the number of sampled dimensions. More importantly, FastLSH has a provable LSH property, which distinguishes it from the non-LSH fast sketches. To demonstrate its broad applicability, we conduct comprehensive experiments over three machine learning tasks, i.e., outlier detection, neural network training and nearest neighbor search. Experimental results show that algorithms powered by FastLSH provide up to 6.1x, 1.7x and 20x end-to-end speedup in anomaly detection latency, training time and index construction, respectively. The source code is available at https://anonymous.4open.science/r/FastLSHForMachineLearning-7CAC.
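As a rough illustration of the scheme the abstract describes (random sampling combined with random projection, cutting the per-hash cost from O(n) to O(m)), one FastLSH-style hash function can be sketched as follows. This is a reconstruction from the abstract and the review discussion below, not the authors' released code; the class name and parameter values are hypothetical, and the sampling-with-replacement detail follows Reviewer ze7N's reading of the paper.

```python
import numpy as np

class FastLSHHash:
    """Sketch of one FastLSH-style hash function: subsample m of the n
    coordinates (with replacement, per the review discussion), then apply
    a standard E2LSH-style projection h(v) = floor((a . v_S + b) / w)."""

    def __init__(self, n, m, w=4.0, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.dims = rng.integers(0, n, size=m)  # sampled coordinates
        self.a = rng.standard_normal(m)         # Gaussian projection vector
        self.b = rng.uniform(0.0, w)            # random offset in [0, w)
        self.w = w

    def __call__(self, v):
        # Only the m sampled coordinates are read: O(m) per hash
        # evaluation instead of O(n) for a full E2LSH projection.
        v = np.asarray(v)
        return int(np.floor((self.a @ v[self.dims] + self.b) / self.w))
```

An index would draw k x L independent functions of this kind; nearby vectors then agree on most of them, while distant vectors rarely do.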
[ "Locality-sensitive hashing", "random sampling", "machine learning" ]
Reject
https://openreview.net/pdf?id=BvQkjCnXXr
https://openreview.net/forum?id=BvQkjCnXXr
ICLR.cc/2025/Conference
2025
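The reviews and author responses below repeatedly appeal to the LSH property of the E2LSH baseline of Datar et al., namely that the collision probability of h(x) = floor((a.x + b)/w) decreases monotonically with the distance between two vectors. That monotonicity can be checked empirically; the Monte Carlo sketch below is illustrative only (the function name and parameter values are assumptions, not taken from the paper).

```python
import numpy as np

def e2lsh_collision_rate(s, n=64, w=4.0, trials=20000, seed=0):
    """Estimate P[h(u) == h(v)] for the standard E2LSH function
    h(x) = floor((a . x + b) / w), over a pair at distance ||u - v|| = s."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)
    d = rng.standard_normal(n)
    v = u + s * d / np.linalg.norm(d)       # place v at distance s from u
    A = rng.standard_normal((trials, n))    # one Gaussian vector per trial
    b = rng.uniform(0.0, w, size=trials)    # one random offset per trial
    hu = np.floor((A @ u + b) / w)
    hv = np.floor((A @ v + b) / w)
    return float(np.mean(hu == hv))

# The LSH property: the collision rate falls as the distance grows.
rates = [e2lsh_collision_rate(s) for s in (0.5, 2.0, 8.0)]
assert rates[0] > rates[1] > rates[2]
```

The author responses below argue that FastLSH retains exactly this monotone behavior (their Theorem 4.2) even when the projection reads only m sampled coordinates, whereas only looser JL-style bounds are available for ACHash.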
{ "note_id": [ "nLIungio2F", "j5llCehqQE", "aHo4Uc0BD6", "XeUoOTC1oy", "V3clRT9e6W", "UXzzmHtQI8", "Ot9n9o2M0d", "NPx2J0vPfo", "N5uMnbY724", "Gmmv0SKndz", "G9qxheZJ09", "CvbWAiUqpy", "BN7NV6mxH9", "8FVhAtiKqE", "6G9weeifrl" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732482006442, 1731186441557, 1731639877013, 1731637958161, 1731637883865, 1737523744278, 1731639614061, 1729336621430, 1732434767952, 1730698606718, 1731644329297, 1730647825229, 1734558535560, 1731638034937, 1731639815543 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6094/Reviewer_ze7N" ], [ "ICLR.cc/2025/Conference/Submission6094/Reviewer_ze7N" ], [ "ICLR.cc/2025/Conference/Submission6094/Authors" ], [ "ICLR.cc/2025/Conference/Submission6094/Authors" ], [ "ICLR.cc/2025/Conference/Submission6094/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6094/Authors" ], [ "ICLR.cc/2025/Conference/Submission6094/Reviewer_yqFQ" ], [ "ICLR.cc/2025/Conference/Submission6094/Reviewer_sEFP" ], [ "ICLR.cc/2025/Conference/Submission6094/Reviewer_XLCc" ], [ "ICLR.cc/2025/Conference/Submission6094/Reviewer_yqFQ" ], [ "ICLR.cc/2025/Conference/Submission6094/Reviewer_sEFP" ], [ "ICLR.cc/2025/Conference/Submission6094/Area_Chair_pH5w" ], [ "ICLR.cc/2025/Conference/Submission6094/Authors" ], [ "ICLR.cc/2025/Conference/Submission6094/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I thank the authors for their response. I am still not convinced that the ideas in the paper add significant value over prior work. 
From what I understand, compared to ACHash, FastLSH doesn't have the Hadamard transformation step (which is crucial for such schemes to work in general), and in the second step, instead of subsampling the coordinates without replacement, it uses subsampling with replacement.\\n\\nAt a conceptual level, the only place where the paper is suggesting something different is the subsampling scheme. Can the authors clarify if there is any significant conceptual advantage of subsampling with replacement, compared to without replacement as was done in ACHash? For example, are there reasonable settings, where one would expect the proposed subsampling scheme to do better and why? \\n\\nAs far as the provable guarantees in the paper are concerned, the paper shows equivalence with LSH for m going to infinity, which defeats the whole purpose of subsampling. It is unclear if that is a meaningful contribution.\"}", "{\"summary\": \"The paper focusses on making locality sensitive hashing (LSH) faster under the \\\\ell_2 metric. The standard LSH scheme involves taking an inner product of the query with a random vector and bucketing the query according to the obtained value. The paper instead proposes to speed up this operation by first subsampling m coordinates of the vector and computing the inner product with the corresponding subsampled vector. It is shown that as m tends to infinity, the probability of collision under the proposed scheme is same as the standard LSH. The paper also shows the superior performance of the proposed scheme empirically.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Locality sensitive hashing is used widely, so any effort in speeding it up is welcome as it can have huge practical significance.\", \"weaknesses\": \"Dasgupta et. al. [1] came up with a two-step proposal to speed up the standard LSH using fast Johnson\\u2013Lindenstrauss transform. 
The LSH scheme proposed in this paper essentially removes the first step. However, this step is crucial especially when the dataset consists of sparse vectors. Thus, this paper seems to rediscover some of the ideas already present in [1], while missing the crucial ingredients.\", \"more_details\": \"The hash function proposed in [1] consists of two steps: (i) First multiply the query vector by a diagonal matrix with diagonal entries chosen to be 1 or -1 equiprobably. Then multiply the resulting vector by a Hadamard matrix. (ii) Subsample roughly m coordinates of the resulting vector uniformly at random (without replacement), take the inner product of the resulting subsampled vector with a random Gaussian vector and finally bucket the query according to the obtained value (subsampling is actually done by choosing each coordinate with some fixed probability q = m/d). The first step is crucial when the vectors involved are sparse. In that case, most of the contribution to the \\ell_2 distance comes from very few non-zero coordinates. Therefore, for the subsampling to be effective, m will need to be very high, defeating the main purpose. The first step applies a norm-preserving rotation to the vectors, with the desirable property that the vector so obtained is dense, that is, no entry is too large with high probability.\\n\\nThe scheme proposed in this paper essentially applies the second step but where the subsampling is done with replacement. However, not applying the first step means m will need to be very large for sparse vectors. That is why the paper could only show asymptotic equivalence (for m going to infinity) between the proposed scheme and the standard LSH. In contrast, Dasgupta et al. prove that the collision probability of their proposed scheme is close to the standard LSH for m = O(log d). \\n\\n[1] Dasgupta, Anirban, Ravi Kumar, and Tam\\u00e1s Sarl\\u00f3s. 
\\\"Fast locality-sensitive hashing.\\\" Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. 2011.\", \"questions\": \"Any clarification on points raised in the weaknesses section would be helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continued Comment to Reviewer yqFQ\", \"comment\": \"## Response to W3.2\\nFor ANN search tasks, many papers [1-4] randomly select a subset from the given query set, such as 50, 100, or 200. We have also adopted this approach.\\n\\n[1] Huang, Qiang, et al. Query-aware locality-sensitive hashing for approximate nearest neighbor search. In Proceedings of the VLDB Endowment 9.1 (2015): 1-12.\\n\\n[2] Sun, Yifang, et al. SRS: solving c-approximate nearest neighbor queries in high dimensional euclidean space with a tiny index. In Proceedings of the VLDB Endowment (2014).\\n\\n[3] Lei, Yifan, et al. Locality-sensitive hashing scheme based on longest circular co-substring. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data. 2020.\\n\\n[4] Tian, Yao, Xi Zhao, and Xiaofang Zhou. DB-LSH 2.0: Locality-sensitive hashing with query-based dynamic bucketing. IEEE Transactions on Knowledge and Data Engineering (2023).\\n\\n\\n## Response to W3.3\\nThe acceleration of FastLSH does not require any hyperparameter tuning across datasets and is solely dependent on the chosen sampling dimension $m$. Since we report the total execution time for the anomaly detection task, which includes both index construction and query time, the execution time appears to vary significantly due to different dataset sizes and choices of $m$, as shown in Table 4 and Parameter Settings in Appendix C.1. 
Furthermore, the results in the anomaly detection and neural network training tasks, as shown in Table 1, Table 2, Table 3 and Figure 2, demonstrate that FastLSH does not introduce distortions in the Hamming distances.\"}", "{\"title\": \"Comment to Reviewer ze7N\", \"comment\": \"Thanks for your feedback. Next we will address the main points you raised.\\n\\n## Difference Between FastLSH and ACHash\\nTo answer the question raised, we need to first make it clear what the provable LSH property is. A formal definition can be found in Def. 2.1, and a somewhat informal description is that the collision probability should decrease with the distance between the given pair of vectors. Only with this property can we enjoy the nice features that LSH-style algorithms deliver [1]. \\n\\nFor ACHash [2], unfortunately, this property does not hold because it can only offer a JL-transformation-style lower/upper bound on the collision probability. To be precise, suppose the distance between vectors $x_1$ and $y_1$ is $s_1$ and the distance between vectors $x_2$ and $y_2$ is $s_2$, with $s_1$ < $s_2$; it is impossible for ACHash to say the collision probability $p(s_1)$ > $p(s_2)$ because it only has information about the loose lower/upper bound on the collision probability. In a nutshell, the JL-transformation-style lower/upper bound cannot deliver the LSH property. \\n\\nIn contrast to ACHash, FastLSH achieves this goal by deriving the exact collision probability DIRECTLY in Theorem 4.2, allowing one to calculate precisely the probability of collision for any pair of vectors with distance $s$. To deal with the new random variable $\\tilde{s}X$, we overcome quite a lot of technical difficulty (Lemma 4.4 and Appendix A.2) to derive the PDF of $\\tilde{s}X$ in Eqn. (9). Figure 8 and Figure 9 in Appendix C.7 illustrate a comparison of the $\\rho$ curves, an important measure of the LSH property, for E2LSH [1] and FastLSH. 
These figures show that their $\\rho$ curves match well across different datasets, verifying that FastLSH and E2LSH have the same LSH performance. Note that it is impossible for ACHash to plot such $\\rho$ curves. \\n\\nOverall, ACHash does not possess the provable LSH property. Although FastLSH and ACHash are similar in some aspects, the theoretical analysis methods of ACHash are fundamentally different from those of FastLSH.\\n\\n[1] Datar, M., Immorlica, N., Indyk, P., and Mirrokni, V. S. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the twentieth annual symposium on Computational geometry, pp. 253\\u2013262, 2004.\\n\\n[2] Dasgupta, Anirban, Ravi Kumar, and Tam\\u00e1s Sarl\\u00f3s. Fast locality-sensitive hashing. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. 2011.\\n\\n## FastLSH Handles Sparse Data\\nWe are already aware of the issue you have raised. FastLSH can handle sparse data, as discussed in Appendix C.6. We use the MNIST dataset to validate this claim, and the results are shown in Table 9 in Appendix, where MNIST is a sparse dataset with around 2.6% non-zero elements. As you pointed out, for very sparse vectors, we can use the Hadamard transform to make the data dense. This makes FastLSH and ACHash similar, so FastLSH can be considered a generalized version of ACHash. In many practical applications, data is often dense, in which case FastLSH can be used directly. For sparse vectors, we apply the Hadamard transform for data preprocessing. This enhances the adaptability of FastLSH compared to ACHash, enabling faster index construction and better query performance.\"}", "{\"title\": \"Comment to Reviewer XLCc\", \"comment\": \"Thanks for your feedback. 
Next we will address the main points you raised.\\n## Response to Weaknesses\\nOur work focuses on using an LSH-based method to accelerate end-to-end index construction, supporting machine learning tasks such as anomaly detection, neural network training, and ANN search. For LSH-based applications (e.g., the ACE method for anomaly detection [1]), the most significant advantages lie in low memory usage (less than **4MB** of memory) and fast query processing (up to **150x** speedup), while maintaining comparable query accuracy to other advanced methods. We have chosen ACE as a SOTA method. Although many graph-based methods (e.g., HNSW [2]) achieve higher query accuracy in ANN search, they are not suitable for accelerating end-to-end index construction in LSH-based applications. Research has shown that when HNSW is used to speed up neural network training, its time consumption is **23** times higher than using LSH, and HNSW does not have guarantees for search performance [3]. For these LSH-based applications, a notable characteristic is that index construction time is more important than query processing, and the execution time is mainly dominated by the hashing cost, which FastLSH significantly reduces, as shown in Figure 2, Table 1, Table 2 and Table 3. Furthermore, LSH is a fundamental component for high-dimensional ANN search, and we selected E2LSH [4] and MPLSH [5] as baselines because they are commonly used in practice. Our goal is to verify why FastLSH can improve query accuracy in anomaly detection and neural network training tasks, as shown in Figure 2, Table 1, Table 2 and Table 3. This is because FastLSH not only achieves comparable query accuracy to E2LSH and MPLSH but also significantly speeds up end-to-end index construction, as shown in Figure 3, Figure 4, Figure 6 and Table 9 in Appendix.\\n\\n[1] Luo, Chen, and Anshumali Shrivastava. Arrays of (locality-sensitive) count estimators (ace) anomaly detection on the edge. 
In Proceedings of the 2018 World Wide Web Conference. 2018.\\n\\n[2] Cong Fu, Chao Xiang, Changxu Wang, and Deng Cai. Fast approximate nearest neighbor search with the navigating spreading-out graph. arXiv preprint arXiv:1707.00143, 2017.\\n\\n[3] Chen, Beidi, et al. Mongoose: A learnable lsh framework for efficient neural network training. In Proceedings of International Conference on Learning Representations. 2020.\\n\\n[4] Datar, M., Immorlica, N., Indyk, P., and Mirrokni, V. S. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the twentieth annual symposium on Computational geometry, pp. 253\\u2013262, 2004.\\n\\n[5] Qin Lv, William Josephson, Zhe Wang, Moses Charikar, and Kai Li. Multi-probe lsh: efficient indexing for high-dimensional similarity search. In Proceedings of the 33rd international conference on Very large data bases, pp. 950\\u2013961, 2007.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Comment to Reviewer yqFQ\", \"comment\": \"Thanks for your feedback. Next we will address the main points you raised.\\n\\n## Response to W1\\nOur work focuses on using an LSH-based method to accelerate end-to-end index construction, supporting machine learning tasks such as anomaly detection, neural network training, and ANN search. It is worth noting that FastLSH and ACHash do not necessarily reduce query time; their primary purpose is to speed up the hash evaluations. More importantly, LSH can be applied to other tasks beyond ANN search, such as outlier detection and neural network training. For these tasks, only $k \\times L$ hash tables need to be built, and distance computations are not required. Additionally, these tasks might require frequent creation or updating of hash tables, such that the execution time is mainly dominated by the hashing cost; a notable characteristic is therefore that index construction time is more important than query processing. 
In such applications, FastLSH significantly reduces the hashing cost, as shown in Figure 3, Table 1, Table 6 and Table 7 in Appendix C.3. Although many graph-based methods (e.g., HNSW [1]) achieve higher query accuracy in ANN search, they are not suitable for accelerating end-to-end index construction in these applications. Research has shown that when HNSW is used to speed up neural network training, its time consumption is **23** times higher than using LSH, and HNSW does not have guarantees for search performance [2]. \\n\\nRegardless of whether it is the SLIDE framework or other frameworks (ACE [3], et al), if LSH is used for hashing involving inner product computations, FastLSH can integrate well into these frameworks, e.g., [4-6]. This is because, in the case of $m < n$, Lemma 4.8 and Fact 4.9 rigorously prove that FastLSH and E2LSH are very similar, by comparing the first four moments of the two distributions, despite the presence of data-dependent parameters $\\\\epsilon$ and $\\\\lambda$ introduced by $\\\\sigma$. Additionally, the empirical results in Table 9 in Appendix show that as $m$ increases, the influence of these parameters $\\\\epsilon$ and $\\\\lambda$ becomes very small, further indicating that FastLSH and E2LSH are very close. \\n\\nIt should be noted that the end-to-end index construction time for Gist1M is not approximately 30 seconds; we have scaled the results for better display, as shown in Figures 4(d) and 6(d). In fact, even when using FastLSH, with a sampled dimension of $m=30$ and under our $k \\\\times L$ settings, the index construction time takes at least 200 seconds. In contrast, when using 960 dimensions with E2LSH, the time consumption is even higher, with end-to-end index construction taking over a thousand seconds.\\n\\n[1] Cong Fu, Chao Xiang, Changxu Wang, and Deng Cai. Fast approximate nearest neighbor search with the navigating spreading-out graph. arXiv preprint arXiv:1707.00143, 2017.\\n\\n[2] Chen, Beidi, et al. 
Mongoose: A learnable lsh framework for efficient neural network training. In Proceedings of International Conference on Learning Representations. 2020.\\n\\n[3] Luo, Chen, and Anshumali Shrivastava. Arrays of (locality-sensitive) count estimators (ace) anomaly detection on the edge. In Proceedings of the 2018 World Wide Web Conference. 2018.\\n\\n[4] Chen, Beidi, et al. Mongoose: A learnable lsh framework for efficient neural network training. In Proceedings of International Conference on Learning Representations. 2020.\\n\\n[5] Kitaev, Nikita, \\u0141ukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451 (2020).\\n\\n[6] Rabbani, Tahseen, Marco Bornstein, and Furong Huang. Large-Scale Distributed Learning via Private On-Device LSH. In Proceedings of Advances in Neural Information Processing Systems 36 (2024).\\n\\n## Response to W2.1\\n\\nFor given vector pair $(\\\\mathbf{v},\\\\mathbf{u})$, let $s = || \\\\mathbf{v}-\\\\mathbf{u} ||$. For our purpose, assume the collection of $n$ entries $(v_{i}-u_{i})^{2}$ $(i=1,2,\\\\ldots,n)$ is a population, which follows an unknown distribution with mean $\\\\mu =( {\\\\textstyle \\\\sum_{i=1}^{n}}(v_{i}-u_{i})^{2}) /n$ and variance $\\\\sigma^{2}=({\\\\textstyle \\\\sum_{i=1}^{n}}((v_{i}-u_{i})^{2}-\\\\mu)^{2})/ n $. It is obvious that, for any given pair of vectors of finite dimension $n$, the 2-norm of the difference across each dimension MUST has a finite mean $\\\\mu$ and finite variance $\\\\sigma^2$. Here, each entry $(v_{i}-u_{i})^{2}$ for $(i=1,2,\\\\ldots,n)$ is independently sampled from an unknown distribution. Therefore, we can use the CLT [1] to derive Lemma 4.1.\\n\\n[1] https://en.wikipedia.org/wiki/Central_limit_theorem\"}", "{\"summary\": \"This paper proposes a new LSH method, named FastLSH, which aims to accelerate the indexing process of traditional LSH. 
The authors provide theoretical analysis to argue that their approach retains the fundamental LSH property: that the closer two points are, the higher the probability they will collide in the same hash bucket. The empirical contribution is presented through three groups of experiments, where FastLSH is applied in outlier detection, specialized neural network training (SLIDE) using LSH, and nearest neighbor search.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper presents a new approach, FastLSH, which introduces a new improvement to the indexing process of canonical LSH. The idea of selectively sampling dimensions to accelerate indexing is well-motivated, and the authors attempt to show they retain the core LSH property through both theoretical analysis and empirical validation. They do extensive experiments. The paper is easy to understand.\", \"weaknesses\": \"**W1. Weak Justification of the Research Problem\\u2019s Value**\\n\\nThe paper does not sufficiently establish the practical importance of accelerating the indexing process for LSH. Hashing-based methods are already among the fastest algorithms for indexing when compared to quantization-, tree-, and graph-based methods. Among these, LSH is known for fast indexing. From the three applications discussed (outlier detection, neural network training, and nearest neighbor search), it is not evident that further improving LSH indexing speed is a critical need.\\n\\nFor example, in the nearest neighbor search application, the authors report that indexing the GIST1M dataset (960-dimension, 1 million points) takes under 30 seconds\\u2014which is already sufficiently fast for most applications. The paper\\u2019s focus on indexing overlooks a more pressing issue: search efficiency, where LSH typically performs poorly compared to recent graph-based methods such as HNSW [1]. 
Improving search efficiency would address a more significant problem, which is why little effort in the literature is devoted to accelerating LSH indexing. While the SLIDE framework may benefit from faster indexing due to frequent re-indexing, it is a specialized case, and more examples are needed to justify the broader value of this work.\\n\\n**W2. Theoretical Flaws and Insufficient Justification**\\n\\nThe theoretical analysis contains several potential flaws and requires further rigorous justifications to support the authors' claims:\\n\\n**W2.1 Central Limit Theorem (CLT) Application:**\\n\\nIn Lemma 4.1, the authors apply the CLT, but the conditions for its use are not fully satisfied. The CLT assumes i.i.d. samples, yet the proposed method ensures only that the selected dimensions are independently sampled. The elements within each dimension may not be i.i.d. since the authors assume only finite mean and variance for the distribution of $(u_i-v_i)^2$. Since the underlying distribution is unknown, more justification is needed to ensure that the CLT applies. This step is crucial since it forms the foundation for the entire theoretical framework.\\n\\n**W2.2 Asymptotic Convergence of Characteristic Functions:**\\n\\nIn Theorem 4.6, the authors aim to demonstrate that the ratio of the characteristic functions of the original and transformed distributions converges to 1 as $m\\rightarrow \\infty $. However, the convergence depends on the behavior of the input to the characteristic function, x. The authors must show that the ratio of the characteristic functions converges for all values of x. The claim that $x^2\\leq O(m^{-1})$ is \\u201cobvious\\u201d is problematic, as no rigorous justification is provided. 
This is critical since the convergence ratio will diverge for non-zero x if this condition does not hold.\\n\\n**W2.3 Practicality of the Asymptotic Results:**\\n\\nEven if Theorem 4.6 holds, the requirement that $m\\\\rightarrow \\\\infty $ raises practical concerns. Since the authors propose using fewer dimensions (m < n), they must demonstrate that the asymptotic results still hold in practice. Specifically, the authors should quantify how far the transformed distribution deviates from the target distribution under finite m and provide a lower bound for the distribution distance under a suitable metric. Section 4.3 contains only heuristic arguments, making it unclear to what extent the LSH property is preserved after dimension sampling. A more rigorous analysis is required to confirm that the proposed method retains the LSH property, or at least an approximate version of it.\\n\\n**W3. Weaknesses in Experimental Design and Results**\", \"several_flaws_in_the_experimental_design_limit_the_contribution_of_this_paper\": \"**W3.1 Application Scenarios Are Not Well-Aligned with Claims:**\\n\\nThe authors argue that their method is beneficial for scenarios requiring frequent re-indexing. However, the experiments do not reflect such settings. For example, no experiments are conducted on streaming data, which would be a more relevant use case. Moreover, the authors should compare their method to state-of-the-art approaches for high-dimensional data streams, such as [2][3].\\n\\n**W3.2 Incomplete Use of Standard Datasets:**\\n\\nThe authors use well-known datasets such as SIFT, Glove, and GIST, but they do not utilize all queries in these datasets. For instance, the SIFT dataset contains 10,000 queries, yet only 200 were used in the experiments. This is unusual, and the paper provides no justification for this choice. 
The authors should explain how the subset was selected and whether this affects the search performance.\\n\\n**W3.3 Diverse Speedup in Outlier Detection Task:**\\n\\nThe reported speedup of FastLSH over baseline methods varies significantly across datasets, especially in the outlier detection task. This raises concerns about whether the proposed method introduces distortions in the Hamming distances or requires dataset-specific hyper-parameter tuning. Either issue would limit the generality of the method and should be thoroughly investigated and reported.\", \"questions\": \"S1: Strengthen the theoretical analysis and address the problems in W2\", \"s2\": \"Add suitable experiments and give proper analysis to demonstrate the strengths of this paper to reach ICLR standard.\\nS3. Justify the research value of this problem with broad use cases.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response! It addressed some of my concerns, and I have some follow-up questions and request for clarification:\\n\\n**Regarding (W1):**\\n\\nI appreciate your explanation of applying the Hadamard transformation as a solution. While I agree that it can help mitigate the issue, I still have the following concerns:\\n\\n(1) By applying the Hadamard transformation, the dataset is converted into a dense one. It remains unclear whether FastLSH is still effective when handling sparse or maliciously designed datasets.\\n\\n(2) This solution makes FastLSH appear similar to ACHash. Could you elaborate more on the advantages of FastLSH in this context?\\n\\n**Regarding (w2):**\\n\\nI mostly agree with your response here, but I have a few follow-up questions:\\n\\n(1) In Figure 3, there are two rows of plots with the same x-axis and y-axis. Why not combine these rows for a clearer and more concise presentation? 
Are the two rows using different parameter settings, or is there another reason for separating these methods?\\n\\n(2) In the results presented in Figure 3, the speed-up in indexing time appears to be significant only for the Trevi dataset. Could you explain why FastLSH performs better on Trevi compared to the other datasets? Is its performance dataset-dependent, and what conditions of a dataset favor its performance?\\n\\n(3) While the speed-up achieved by FastLSH on the Musk dataset (Table 1) is significant, its improvement on the Statlog Shuttle dataset (Table 3) seems relatively modest. Similar to question (2), could you provide additional explanations or experimental results to clarify whether FastLSH consistently achieves significant speed-ups across different datasets?\\n\\n**Regarding (W3):**\\n\\nI appreciate that the response highlighted experiments with various small values of $m$. However, my initial question aimed to understand why, the infinite $m$ in theory can often be replaced by a small $m$ while still maintaining correctness in practice. Since this concern wasn\\u2019t fully addressed, could you provide more theoretical analysis (or empirical insights) to clarify this point?\"}", "{\"summary\": \"This paper aims at the efficiency of LSH methods while not harming its effectiveness. It reduces the cost of computing hashing functions by random sampling. The authors verify the effectiveness of their methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1. The method seems sound.\\n\\nS2. This paper studies important problems. \\n\\nS3. It is well written.\", \"weaknesses\": \"W1. The experiments focus on LSH based methods for ANNS, outlier detection. However, there are other methods such as proximity graphs for ANNS and OD. Besides, LSH based methods are not the SOTA for both of them. Even though LSH methods are enhanced, it does not really make a progress to ANNS and OD.\", \"questions\": \"Q1. 
I would like to see more experiments to demonstrate that the method in this paper outperforms the SOTA methods for ANNS and outlier detection.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Author Feedbacks\", \"comment\": \"**Response w.r.t. W2.1 Feedback**\\n\\nAs shown in the CLT, the application condition assumes sequential **i.i.d. random samples**. Though the dimensions are independently selected, whether the samples, i.e. $(v_i-u_i)^2$, are i.i.d. is questionable. IMHO, the i.i.d. assumption may hold if $v_i$ across different dimensions and samples are also drawn i.i.d. from a distribution. \\n\\n**Response w.r.t. W2.3 Feedback**\\n\\nGiven that a more rigorous theoretical guarantee is not provided, the term \\\"with theoretical guarantee\\\" is to some extent an overclaim. With the problem in W2.2 not properly addressed, this reduces the theoretical contribution of this paper.\\n\\n**Response w.r.t. W3.1 Feedback**\\n\\nSorry for missing the references, as below:\\n[1] Malkov, Yu A., et al. \\\"Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs.\\\"\\u00a0\\n[2] Yang, Chengcheng, et al. \\\"Efficient locality-sensitive hashing over high-dimensional data streams.\\\"\\n[3] Wang, Hao, et al. \\\"Efficient locality-sensitive hashing over high-dimensional streaming data.\\\"\\n\\n**Response w.r.t. W3.2 Feedback**\\nThough the references are given, the majority of works in this literature follow the official split of these datasets. Given the fact that hashing-based methods are quite fast, it is unnecessary to select a subset of queries. 
By doing so, it is more difficult to position the contribution of this paper in a wider range of literature.\\n\\n**Summary**\\n\\nGiven the feedback from the authors, the problems within the theoretical framework are not well addressed, especially since the authors admit the flaws in Theorem 4.6, which is a key part. The term \\\"with theoretical guarantee of LSH properties\\\" is somewhat overclaimed. \\n\\nMeanwhile, the empirical assessments are not sufficient to show the value in terms of wide application of the proposed methods. For example, given that the authors have provided more works using LSH in model training, they should provide more experiments rather than just SLIDE.\\n\\nIMHO, to well position the contribution of a simple methodology, one can either provide rigorous theoretical guarantees, or conduct extensive and comprehensive experiments to demonstrate its wide usability. Given the current status of this paper and the feedback, I may not change the score.\"}", "{\"summary\": \"This paper introduces FastLSH, a novel locality-sensitive hashing scheme that combines random sampling and random projection to reduce the hashing complexity from $O(n)$ to $O(m)$, where $m$ is the number of samplings and $m<n$.\\nFastLSH is claimed to preserve the LSH properties, i.e., the collision probability can be calculated like that in E2LSH. \\nThe faster hash computations in FastLSH make it well-suited for tasks like anomaly detection, neural network training, and nearest neighbor search.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. FastLSH is simple; it can be easily implemented and seamlessly integrated into existing LSH-based applications.\\n\\n2. The paper provides rigorous theoretical proofs that FastLSH retains the desirable LSH properties.\", \"weaknesses\": \"1. 
The theoretical guarantees hold only when the number of sampled dimensions, $m$, approaches infinity.\\nIn real-world applications, it is possible to construct dense data sets on which FastLSH may fail. For example, consider a data set in which a large proportion of dimensions are the same, with only a few being non-trivial. \\nIn such cases, FastLSH is likely to miss the non-trivial dimensions during the sampling process and end up hashing all data points into the same bucket. This raises concerns about a potential theoretical flaw in FastLSH. \\nTherefore, it would be beneficial for the authors to demonstrate the effectiveness of FastLSH in relatively rare and challenging scenarios, as exemplified above.\\n\\n2. While FastLSH reduces the cost of hash function computations and index construction time, it does not speed up the query process itself. This limits its broader impact on applications where query speed is critical.\\n\\n3. The paper does not sufficiently explore the effect of varying the parameter $m$, the number of sampled dimensions, on both the efficiency and accuracy of FastLSH. \\nSince $m$ plays a crucial role in balancing computational savings with hashing accuracy, understanding its impact across a range of values is essential. \\nWithout a thorough parameter study, it remains unclear how to optimally set $m$ for different datasets or applications.\", \"questions\": \"1. How does FastLSH handle scenarios where only a few dimensions carry critical information, while others are redundant? Could you provide experiments on such challenging data sets? (W1)\\n\\n2. Moreover, is there a mechanism in FastLSH to adaptively select informative dimensions during sampling? There are existing methods that adaptively sample the dimensions based on their informativeness, with a non-uniform distribution. (W1)\\n\\n3. Could you include a parameter study showing how different values of $m$ affect performance across various data sets? (W3)\\n\\n4. 
Furthermore, could you provide guidelines or heuristics on how to choose $m$ for a given application? (W3)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Thanks for your submission to ICLR.\\n\\nThe reviewers raised several concerns about the paper. In particular, the first, third, and fourth reviewers provided concerns about the theoretical guarantees and prior work of the paper. The authors responded to these concerns but the reviewers all followed up with further concerns, which were not addressed by the authors. The second reviewer also had concerns about the empirical study.\\n\\nBecause the reviewers ultimately still have concerns with the paper after the discussion phase, and none of the reviewers are advocating for the paper to be accepted, I cannot recommend the paper to be published at this time.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers converged in agreement to not accept the paper. They also responded to the author rebuttal with further questions which were not addressed by the authors.\"}", "{\"title\": \"Comment to Reviewer sEFP\", \"comment\": \"Thanks for your feedback. Next we will address the main points you raised.\\n\\n## Response to (W1)\\nWe are already aware of the issue you have raised. FastLSH can handle sparse data, as discussed in Appendix C.6. We use the MNIST dataset to validate this claim, and the results are shown in Table 9 in Appendix, where MNIST is a sparse dataset with around 2.6% non-zero elements. As you have focused on, for very sparse vectors, we can use the Hadamard transform to make the data dense. This makes FastLSH and AChash similar, so FastLSH can be considered a generalized version of AChash. In many practical applications, data is often dense, in which case FastLSH can be used directly. For sparse vectors, we apply the Hadamard transform for data preprocessing. 
This enhances the adaptability of FastLSH compared to AChash, enabling faster index construction and better query performance.\\n\\n\\n## Response to (W2)\\nIt is worth noting that FastLSH and ACHash [1] do not necessarily reduce query time; their primary purpose is to speed up the hash evaluations. More importantly, LSH can be applied to other tasks beyond ANN search, such as outlier detection and neural network training. For these tasks, only $k \\\\times L$ hash tables need to be built, and distance computations are not required. Additionally, these tasks might require frequent creation or updating of hash tables, such that the execution time is mainly dominated by the hashing cost. In such applications, FastLSH significantly reduces the hashing cost, as shown in Figure 3, Table 1, Table 6 and Table 7 in Appendix C.3.\\n\\n[1] Dasgupta, Anirban, Ravi Kumar, and Tam\\u00e1s Sarl\\u00f3s. Fast locality-sensitive hashing. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. 2011.\\n\\n## Response to (W3)\\nWe make a comprehensive analysis of $m$. The first result is illustrated in Theorem 4.6 and Corollary 4.7, where the asymptotic analysis of FastLSH indicates the equivalence between FastLSH and the classic LSH as $m$ approaches infinity. This analysis aims to show the relationship between our proposal and LSH from a theoretical perspective. \\n\\nIn practice, however, $m$ is expected to be a small number (less than $n$) to make FastLSH useful. In this case, the difference between FastLSH and LSH is controlled by $m$ and the variance in the squared distances of coordinates of a pair of data items, which is data-dependent and has nothing to do with $n$. \\n\\nOur second main result (Lemma 4.8 and Fact 4.9) quantitatively analyzes the difference and shows that the gap between FastLSH and LSH can be captured by parameters $\\\\epsilon$ and $\\\\lambda$. 
In short, the greater $m$ is, the smaller $\\\\epsilon$ and $\\\\lambda$ will be. Thus, by choosing an appropriate $m$ ($m$ is set to 30 in comparison with E2LSH), $\\\\epsilon$ and $\\\\lambda$ are small enough to reduce the impact of the variance in the squared distances of coordinates to a negligible level, which makes FastLSH and LSH practically equivalent. Table 10 in Appendix C.8 illustrates the empirical evidence on how $m$ affects $\\\\epsilon$ and $\\\\lambda$ across the 12 datasets we experimented with.\\n\\nIn practice, the value of $m$ can be set in the range of $[30, \\\\frac{n}{2}]$. FastLSH provides the default $m$ settings, as shown in Parameter Settings in Appendix C.1, C.2, C.3 and C.4.\"}", "{\"title\": \"Continued Comment to Reviewer yqFQ\", \"comment\": \"## Response to W2.2\\nThe validity of Theorem 4.6 is conditional, i.e., $|x|\\\\leq O(m^{-1/2})$. Theorem 4.6 implies that $\\\\varphi_{\\\\tilde{s}X}(x)$ is asymptotically identical to $\\\\exp(-\\\\frac{ms^{2}x^{2}}{2n})$ within the interval $[-\\\\mathcal{K}\\\\sqrt{\\\\frac{n}{ms^{2}}}, +\\\\mathcal{K}\\\\sqrt{\\\\frac{n}{ms^{2}}}]$ (i.e., $|x|\\\\leq O(m^{-1/2})$), that is, 2$\\\\mathcal{K}$ ``standard deviations\\\", where $\\\\mathcal{K}$ is an arbitrarily large constant. The goal is that when $\\\\mathcal{K}$ is sufficiently large, the effective components of the distributions of FastLSH and E2LSH are essentially identical. What we are proving here is the equivalence under the condition on $x$, not the equivalence for all values of $x$. Since we introduced the data-dependent quantity $\\\\sigma$ in Lemma 4.1, which makes FastLSH a data-dependent LSH, it is intractable for FastLSH to achieve the same level of perfect theoretical analysis as traditional data-independent LSH (E2LSH). For this issue, we will add a clearer description to specify the conditions under which Theorem 4.6 holds.\\n\\n\\n## Response to W2.3 \\nIn practical scenarios, $m$ is often limited. 
We study the relation between $f_{\\\\tilde{s}X}(t)$ and the PDF of $\\\\mathcal{N}(0,\\\\frac{ms^{2}}{n})$ when $m$ is relatively small ($m < n$). Particularly, we derive the first four moments of $\\\\tilde{s}X$ and $\\\\mathcal{N}(0,\\\\frac{ms^{2}}{n})$, and analyze how $m$ and $\\\\sigma$ affect their similarity. While in general the first four moments, or even the whole moment sequence, may not determine a distribution [1], practitioners find that distributions near the normal can be determined very well given the first four moments [2,3]. To this end, we derive Lemma 4.8 rigorously, which is not merely a heuristic argument. \\n\\nThen we quantitatively analyze the difference and show that the gap between FastLSH and LSH can be captured by parameters $\\\\epsilon$ and $\\\\lambda$ (Lemma 4.8 and Fact 4.9). In short, the greater $m$ is, the smaller $\\\\epsilon$ and $\\\\lambda$ will be. Thus, by choosing an appropriate $m$ ($m$ is set to 30 in comparison with E2LSH), $\\\\epsilon$ and $\\\\lambda$ are small enough to reduce the impact of the variance in the squared distances of coordinates to a negligible level, which makes FastLSH and LSH practically equivalent. Table 10 in Appendix C.8 illustrates the empirical evidence on how $m$ affects $\\\\epsilon$ and $\\\\lambda$ across the 12 datasets we experimented with. Furthermore, Figure 8 and Figure 9 in Appendix C.7 illustrate a comparison of the $\\\\rho$ curves (an important measure of the LSH property) for E2LSH and FastLSH. These figures show that their $\\\\rho$ curves match well across different datasets, verifying that FastLSH and E2LSH have the same LSH performance. As stated in W2.2, FastLSH is a data-dependent LSH due to the introduced data-dependent quantity $\\\\sigma$, so a more rigorous theoretical analysis is intractable. To the best of our knowledge, there is currently no data-dependent LSH that provides an LSH property similar to that of data-independent LSH (E2LSH).\\n\\n[1] Lin, G. D. 
Recent developments on the moment problem. Journal of Statistical Distributions and Applications, 4 (1):5, 2017.\\n\\n[2] Leslie, D. Determination of parameters in the johnson system of probability distributions. Biometrika, 46(1/2): 229\\u2013231, 1959.\\n\\n[3] Ramberg, J. S., Dudewicz, E. J., Tadikamalla, P. R., and Mykytka, E. F. A probability distribution and its uses in f itting data. Technometrics, 21(2):201\\u2013214, 1979.\\n\\n\\n## Response to W3.1\\nThe SLIDE framework effectively demonstrates that FastLSH provides significant acceleration for frequently constructing hash tables while also improving query accuracy, even though it does not handle streaming data. Lemma 4.8 and Fact 4.9 provide an analysis of the distributional difference between FastLSH and E2LSH. By adjusting the size of $m<n$, FastLSH can asymptotically become equivalent to E2LSH. If E2LSH effectively handles streaming data, we believe FastLSH is also applicable. Regarding the references [2, 3] you mentioned, if time permits, we will add comparative experiments to further show the broad applicability of FastLSH. However, could you clarify which two papers you are referring to with [2, 3]?\"}" ] }
BvMuyqPvk1
Ensemble and Mixture-of-Experts DeepONets For Operator Learning
[ "Ramansh Sharma", "Varun Shankar" ]
We present a novel deep operator network (DeepONet) architecture for operator learning, the ensemble DeepONet, that allows for enriching the trunk network of a single DeepONet with multiple distinct trunk networks. This trunk enrichment allows for greater expressivity and generalization capabilities over a range of operator learning problems. We also present a spatial mixture-of-experts (MoE) DeepONet trunk network architecture that utilizes a partition-of-unity (PoU) approximation to promote spatial locality and model sparsity in the operator learning problem. We first prove that both the ensemble and PoU-MoE DeepONets are universal approximators. We then demonstrate that ensemble DeepONets containing a trunk ensemble of a standard trunk, the PoU-MoE trunk, and/or a proper orthogonal decomposition (POD) trunk can achieve 2-4x lower relative $\ell_2$ errors than standard DeepONets and POD-DeepONets on both standard and challenging new operator learning problems involving partial differential equations (PDEs) in two and three dimensions. Our new PoU-MoE formulation provides a natural way to incorporate spatial locality and model sparsity into any neural network architecture, while our new ensemble DeepONet provides a powerful and general framework for incorporating basis enrichment in scientific machine learning architectures for operator learning.
[ "scientific machine learning", "basis enrichment", "DeepONet", "neural operators", "operator learning", "sparse methods" ]
https://openreview.net/pdf?id=BvMuyqPvk1
https://openreview.net/forum?id=BvMuyqPvk1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "q417qNYhF6", "idualNO6bA", "cvHgbFz7eo", "4ePqcojwKp" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731460515583, 1730590663498, 1729623883705, 1730137674330 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5053/Authors" ], [ "ICLR.cc/2025/Conference/Submission5053/Reviewer_iEnf" ], [ "ICLR.cc/2025/Conference/Submission5053/Reviewer_diRF" ], [ "ICLR.cc/2025/Conference/Submission5053/Reviewer_okGg" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper applies classical mixture-of-expert paradigm to learn mathematical operators. By incorporating different experts, and therefore enhancing basis representation, the resulting network has stronger learning power.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Integrating the mixture-of-experts paradigm into operator learning enhances the model's capacity for effectively learning operators.\\n2. Building on the mixture-of-experts approach, the authors introduce a partition-of-utility strategy to encourage spatial locality and promote model sparsity.\\n3. A MoE model enhances the model's capacity by incorporating a diverse set of basis functions, which enables improved approximation accuracy across various operator learning tasks.\\n4. Comprehensive ablation study regarding key design factors of the MoE framework.\", \"weaknesses\": \"While the paper claims to present a novel framework combining classical expert neural networks, it largely repackages existing concepts rather than introducing groundbreaking ideas. The mixture-of-experts (MoE) paradigm is a well-established approach in machine learning, although innovative when applied in scientific ML, does not fundamentally transform the field. 
The combination of these elements lacks a compelling new problem formulation or a significant shift in methodology. The quality of the paper is undermined by several factors. There is a lack of critical evaluation of the scenarios where the PoU-MoE approach may fail or perform suboptimally, especially concerning the boundary learning. While distributing data spatially to different expert models could potentially reduce the learning complexity compared to learning the data globally, it also introduces significant complexities depending on the number of mixtures and the data partitioning strategy. The experimental results do not convincingly demonstrate a substantial improvement over state-of-the-art methods, but only a comparison among several of its MoE variants.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"no.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a spatial mixture-of-experts (MoE) DeepONet architecture that utilizes partition-of-unity (PoU) approximation to combine expert networks across overlapping spatial patches. By integrating this localized approach with proper orthogonal decomposition (POD), the authors achieve 2-4x lower errors compared to standard DeepONets on several PDE problems. The authors also provide theoretical guarantees for the proposed architecture through universal approximation theorems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents its technical content in a clear, organized manner.\\n\\n2. The integration of partition-of-unity principles into the DeepONet framework represents an interesting approach to incorporating spatial locality.\\n\\n3. The work combines theoretical analysis (universal approximation theorems) with systematic empirical validation.\\n\\n4. 
The proposed method demonstrates consistent performance improvements over standard DeepONets across multiple PDE examples.\", \"weaknesses\": \"1. The experimental comparisons are primarily focused on DeepONet variants, omitting comparisons with other popular neural operators like FNO, which would provide broader context for the method's effectiveness.\\n\\n2. The time-dependent PDE examples are restricted to single-step predictions (from one time point to another), leaving open questions about the method's capability to learn full temporal trajectories when time coordinates are included in the trunk network inputs.\\n\\n3. The dependence on predefined partitions may limit the method's flexibility and generalizability, particularly for problems where optimal partition locations are not known a priori.\", \"questions\": \"1. How sensitive is the method's performance to the number of partitions? It would be valuable to see an ablation study on this hyperparameter.\\n\\n2. Given that POD bases are computed from discretized output functions, how does the method handle evaluation at arbitrary points y not present in the training discretization?\\n\\n3. How well does the method generalize to learning mappings from initial conditions to full spatiotemporal solutions, rather than just single-time predictions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces an innovative enhancement for DeepONets, integrating multiple trunk networks to boost expressivity and generalization. It proposes a partition-of-unity mixture-of-experts (PoU-MoE) trunk structure to promote spatial locality, offering a refined approach to operator learning. Theoretical guarantees and extensive experiments across various PDE problems reveal that ensemble DeepONets, particularly the POD-PoU variant, achieve error reductions of 2-4x compared to standard DeepONets. 
This work offers valuable insights into effective trunk configurations and highlights a promising direction for advancing operator learning, albeit with increased computational requirements.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The partition-of-unity mixture-of-experts (PoU-MoE) trunk introduces a novel approach that enhances spatial locality and promotes model sparsity.\", \"As reported, the ensemble DeepONets, particularly the POD-PoU variant, achieve substantial error reductions (2-4x) compared to standard DeepONets.\", \"Universal approximation capabilities are analyzed.\"], \"weaknesses\": [\"The presentation could be improved. For instance, a clear description and detailed experimental setup for each baseline in Table 1 should be provided.\", \"The scope of this work appears somewhat limited, as it primarily focuses on testing enrichment strategies for basis functions within the specific context of operator learning. Applying these strategies to other popular frameworks is not trivial. Although the authors suggest that these methods might extend to FNO, sufficient details and evidence to substantiate this claim are lacking (Appendix B is not convincing enough).\", \"No comparison with other popular frameworks e.g. CNO [1], FNO, variants of FNO, etc.. 
Although this is also partially due to the fact that the scope of this work is limited to DeepONet.\", \"While DeepONet is among the most well-known neural operators with clearly identifiable basis functions (the trunks), exploring additional works could help assess the generalizability of the proposed frameworks, such as [2] (treating the INRs/SIREN as producing basis functions as in DeepONet) .\", \"[1] Convolutional Neural Operators for robust and accurate learning of PDEs; Bogdan Raoni\\u0107, Roberto Molinaro, Tim De Ryck, Tobias Rohner, Francesca Bartolucci, Rima Alaifari, Siddhartha Mishra, Emmanuel de B\\u00e9zenac; 2023\", \"[2] Operator Learning with Neural Fields: Tackling PDEs on General Geometries; Louis Serrano, Lise Le Boudec, Armand Kassa\\u00ef Koupa\\u00ef, Thomas X Wang, Yuan Yin, Jean-No\\u00ebl Vittaut, Patrick Gallinari; 2024\"], \"questions\": \"1. Can you clearly explain what each of these baselines are in Table 1?\\n2. The choice of testing datasets are not so conventional, how does it perform on e.g. NS from Li et. 2020?\\n3. Why is (P + 1)-Vanilla MUCH worse than Vanilla (ours)? To my understanding, you simply add a layer to (P + 1)? Then with residual connection, it should be at least similar to Vanilla? Is there no residuals in your network?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
BuBBRn0zFD
Automated Discovery of Pairwise Interactions from Unstructured Data
[ "Zuheng Xu", "Moksh Jain", "Alisandra Kaye Denton", "Shawn T. Whitfield", "Aniket Rajiv Didolkar", "Berton Earnshaw", "Jason Hartford" ]
Pairwise interactions between perturbations to a system can provide evidence for the causal dependencies of the underlying mechanisms of a system. When observations are low-dimensional, hand-crafted measurements, detecting interactions amounts to simple statistical tests, but it is not obvious how to detect interactions between perturbations affecting latent variables. We derive two interaction tests that are based on pairwise interventions, and show how these tests can be integrated into an active learning pipeline to efficiently discover pairwise interactions between perturbations. We illustrate the value of these tests in the context of biology, where pairwise perturbation experiments are frequently used to reveal interactions that are not observable from any single perturbation. Our tests can be run on unstructured data, such as the pixels in an image, which enables a more general notion of interaction than typical cell viability experiments, and can be run on cheaper experimental assays. We validate on several synthetic and real biological experiments that our tests are able to identify interacting pairs effectively. We evaluate our approach on a real biological experiment where we knocked out 50 pairs of genes and measured the effect with microscopy images. We show that we are able to recover significantly more known biological interactions than random search and standard active learning baselines.
[ "causal independence testing", "representation learning", "active learning" ]
Reject
https://openreview.net/pdf?id=BuBBRn0zFD
https://openreview.net/forum?id=BuBBRn0zFD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uEPYvzZwCb", "tayQoHPrjS", "rOd6FMKTBC", "r2E91ZjlqS", "qajViAfvhT", "oAfMoYadbs", "o243m2CMmA", "jAFJVeRCvF", "dsQm02jh1Z", "Zozdamke97", "VHNyuQ9Ft8", "SZB1sakQ9T", "RX7PvBXsZQ", "NIGHA2mjYc", "LSDOt0EC34", "KPXs5QVOpg", "H2GbT5Xq6U", "Cf9xaPZ2RT", "AsDywVXH4M", "Ajz8rf8X3H", "69ibekYylH", "64PTMx7t2m", "5jJ0pBoo6z", "2oUSo1uqvP", "1dEpZWsm8K", "1aCf5lLuKA", "02IXLzpTS3" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734520101552, 1732158006104, 1732580046492, 1732651252656, 1732289125516, 1729832010255, 1732158078143, 1732580488203, 1732617796894, 1732246736944, 1732157422490, 1732157465358, 1732157977875, 1732579443413, 1732581178084, 1730760759637, 1737523838117, 1730596901619, 1732157741494, 1732580456184, 1732158114360, 1732157854964, 1730310361308, 1732561057924, 1732158168170, 1732579835526, 1733075212237 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7425/Area_Chair_WWZR" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Reviewer_GVrM" ], [ "ICLR.cc/2025/Conference/Submission7425/Reviewer_hnXh" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Reviewer_1aqQ" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" 
], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Reviewer_GVrM" ], [ "ICLR.cc/2025/Conference/Submission7425/Reviewer_GVrM" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7425/Reviewer_1aqQ" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Reviewer_B1Jx" ], [ "ICLR.cc/2025/Conference/Submission7425/Reviewer_hnXh" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Authors" ], [ "ICLR.cc/2025/Conference/Submission7425/Reviewer_hnXh" ] ], "structured_content_str": [ "{\"metareview\": \"An interesting paper, which unfortunately does not pass the bar for acceptance at ICLR. Substantial points of disagreements still coexisted with 1aqQ at the end of the review process, and while I can buy the authors' argument for point (1), I believe a bit more could have been done for points (2-3). I can only recommend that the authors polish further their paper for a next round of submission.\", \"additional_comments_on_reviewer_discussion\": \"Substantial discussions with reviewer 1aqQ, in particular on the novelty of the framework and assumptions made, during rebuttal time.\"}", "{\"comment\": \"> Have you conducted any validation beyond the synthetic lethality example? As the title suggests, this paper aims to propose a general method for testing \\\"interaction effects,\\\" but with no examples beyond synthetic lethality, it might be more suitable to focus claims on identifying biological \\\"epistasis.\\\"\\n\\nFirst, we\\u2019d like to clarify that our gene-gene interaction example is not limited to synthetic lethality. 
The two testing procedures aim to detect general interactions defined by violations of separability and disjointedness. Biological interactions that do not align with these definitions are not expected to be identified by our approach. It turns out that many of the gene-gene interactions identified in our experiments fall under synthetic lethality relationships, suggesting that this relationship is well captured by the two probabilistic models of pairwise interactions. The challenge of going beyond synthetic lethality is primarily in finding good sources of known biological relationships (though epistasis is a great suggestion). Pairwise morphological interactions are not well characterized in the biology literature because, without automated detection methods like the one we propose, going beyond cell viability requires a human to select & measure a morphological target of interest.\\n\\nWe did, however, validate both testing procedures on several synthetic datasets in Section 5.1 and tested the separability of interactions between multiple CRISPR guides targeting the genes TSC2 and MTOR (see the left panel of Figure 3).\\n\\n> For instance, when generating test cases in software development, identifying bugs that occur only under certain condition combinations could fit within this paper\\u2019s defined \\\"interaction effect.\\\" A system might function correctly when condition 1=True or condition 2=True, but malfunction when both are true. Do you think this method is also effective for such identification? Specifically, what is the rationale for identifying this condition combination as promising without observing the behavior under condition 1=True and condition 2=True? Would this not be challenging in black-box testing?\\n\\nThis sounds like an interesting application! 
While black-box testing for software bugs falls outside our area of expertise, if the software output measurements can be modelled using either the separability or disjointedness framework, we believe our approach could be applicable.\\n\\nThank you again for your comments. We are happy to answer any further questions you have during the discussion period.\"}", "{\"title\": \"Thank you for your follow-up comments! Please see our response below.\", \"comment\": \"> (1) Lack of comparisons with other methods\\nThe response to this issue remains unconvincing. I do not believe it is impossible to compare your method with others. ...\\n\\nThank you for the follow-up. This seems to be our key point of disagreement: one should *never* compare statistical tests with two *different* null hypotheses with respect to how the tests perform on a single dataset, because then you are jointly testing the utility of the respective null hypothesis and the statistical efficiency of a test, and your results only apply to *that* dataset. \\n\\nThe problem with the procedure that you outline is that the null hypotheses of standard tests are typically defined with respect to your representation. For example, if $z = h(x)$ is your \\\"fixed-dimensional multivariate data\\\", then the null hypothesis is implicitly defined with respect to $h$. Our tests are **non-parametric** statements about the relationship between distributional assumptions and their associated null hypotheses (i.e. they do not depend on $h$ beyond ruling out any $h$ that throws away information). Any comparison between a test that depends on a particular choice of $h$ and one that is agnostic to $h$ (i.e. our non-parametric tests) is attempting to compare the outcome of two tests that have *two different null hypotheses*, which will be deeply misleading in general. 
\\n\\nWhile you can use data to compare the statistical efficiency of a set of tests that have the same null hypothesis, one should *never* choose a null hypothesis itself as a function of what works best on one particular dataset, because that can lead to conclusions that only hold for that particular data and embedding function. This is especially misleading when one test explicitly depends on the choice of embedding function. For example, if we were to discover that [standard test x] on an autoencoder's embeddings leads to better discoveries of known biology, all we could conclude is that this is true for *this dataset* and *this autoencoder* (standard autoencoder embeddings are not identified, so the conclusion may even change between random seeds of the autoencoder's training). By contrast, our theory is general: it gives explicit well-defined notions of dependences that do not depend on your choice of representation and explains how these can be tested. The biological application serves as an instance of a class of interaction testing problems that we could (in principle) solve with this method.\\n\\nTo be clear, we are not at all opposed to using existing nonparametric tests: in section 3.2 we use Maximum Mean Discrepancy (MMD) tests to test disjointness, and we are currently working on follow-up work that tests separability using Fisher Divergences and Kernelized Stein Discrepancies [Liu et al 2016]. All of these tests are comparable because they test the same null hypothesis (i.e. theorem 3.6).\\n\\nRegarding the statistical interaction tests mentioned by the reviewer, we are unsure which specific methods are being referred to. As previously noted, the multivariate factor analysis-based tests suggested, such as those in Berrington de Gonz\\u00e1lez and Cox [2007], are not applicable to our setting. 
These methods require **scalar response variables**, where relationships between the response and factors (various treatments) are modeled using linear or generalized linear regressions. In contrast, our setting involves **multivariate response variables**, rendering those methods inapplicable. That is, in a standard regression of the form $y = f(x_1, x_2)$ we measure a high-dimensional $y$, not a high-dimensional $x$. The only two recent papers that we can find that apply to high-dimensional responses are \\\"Sparse Learning and Structure Identification for Ultrahigh-Dimensional Image-on-Scalar Regression\\\" [Li et al 2021] and \\\"Interaction pursuit in high-dimensional multi-response regression via distance correlation\\\" [Kong et al. 2022], neither of which applies to our setting because both have strong linearity assumptions (but both emphasize the same point we make, which is that this area has been understudied). \\n\\nMoreover, we'd like to clarify that having a fixed-dimensional observation does not necessarily imply that it is *structured* with respect to the interventions. In many cases, the influence of interventions is entangled, inducing distribution shifts across all dimensions. A structured transformation would ideally disentangle the measurements into a set of causal latent variables, enabling direct observation of dependencies between two single perturbations among the inferred latents. In contrast, our framework does not rely on such disentanglement (see our discussion in the related work section and lines 192\\u2013193).\"}", "{\"title\": \"Manuscript updated\", \"comment\": \"Dear Reviewers,\\n\\nWe thank you again for the engaging discussions and thoughtful feedback on our work. We have now updated our submission to reflect the discussions.\"}", "{\"title\": \"Thank you for the response, a few follow-ups\", \"comment\": \"Thank you for your response. I have a few follow-up questions:\\n1. 
I am slightly confused about the notion \\\"if you're accurately predicting all the pairs you think are most likely to interact, then there probably are no interactions left in the matrix.\\\" You claim that if you are accurately predicting the loss $\\\\frac{1}{|B|}\\\\sum_{i,j \\\\in B}|\\\\vec{h}_{i,j} - (\\\\vec{h}_i +\\\\vec{h}_j)|$, you have likely found most of the pairwise interactions. My confusion is regarding this claim. Suppose during round 0 (i.e., before any data acquisition) your predictor/posterior is perfectly calibrated. In that case, we'd accurately predict the loss for most pairs, so would we stop before any data acquisition? On the other hand, if the posterior is very poorly calibrated, your false discovery rate would be very high. I am unsure how to think of the tradeoff between these things. Could you clarify?\\n\\n2. Thank you for the pointer to Celik et al 2024 on microscopy data! It's interesting to see this applied to gene perturbations.\\n\\n3. Thank you for engaging with my concern on how often the product of the marginals differs from the joint. I agree that Ahlmann-Eltze et al [2024] is concurrent and thus this paper cannot be judged against that one. However, my broader issue is how necessary it is to model non-linear interactions of perturbation effects. Specifically, I'd like to see a quantitative estimate (or an argument for why such an estimate can't be derived) for how often the model diverges from the linear model. This is an important question, even if the linear model is a special case of your model, as if, say, 99% of interactions are linear, then a linear model could be a strong \\\"inductive bias\\\" that helps more efficiently discover pairwise interactions. The authors state that \\\"we focus on detecting when this double perturbation result cannot be recovered from adding single perturbations,\\\" but I still don't understand how often this occurs. 
If this can be clarified, and if the authors can run this baseline of revealing $\\delta_{ij}$ on the pair $i,j$ for which $\\delta_i,\\delta_j$ are marginally most different from $\\delta_0$, then I'd be willing to raise my score.\"}", "{\"summary\": \"The authors propose two test statistics to measure the separability and disjointness between pairs of perturbations. Perturbations are separable if they affect disjoint subsets of latent variables, and are disjoint if they affect disjoint parts of the observation. Their separability metric is validated on two synthetic datasets where the ground-truth data generating process is known (i.e. the graph of the perturbation and latent variables), as well as on a real cell painting dataset. They also show that the disjointness metric can be used in an active learning context on cell painting data to better uncover perturbations with pairwise interactions relative to several baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors tackle the important problem of determining whether it is worthwhile to run a costly experiment where two perturbations are applied simultaneously.\", \"The evaluation is thorough, and demonstrates utility on a real-world problem.\"], \"weaknesses\": [\"This is an applied paper, and I believe the novelty is weak.\", \"The separability metric essentially measures the mutual information between the latent variables corresponding to a pair of perturbations. There are many different ways to instantiate this, and it's unclear whether the authors' proposed metric itself is a valuable contribution.\", \"A similar statement also holds for the disjointness metric.\", \"Essentially, this paper combines many existing methods to achieve an empirical (and somewhat narrow) goal. 
I believe this is a valuable contribution, but it may be suited for a more applied venue.\"], \"questions\": [\"Can you counter my points (in weaknesses) regarding the lack of novelty in this work?\"], \"minor_points\": [\"The right-hand side of the bottom equation on p.4 seems off; isn't $p_{Z_i}$ a distribution over a scalar random variable, and the argument $g^{-1}(x)$ is a vector?\", \"I think the clarity can be improved (and some space saved) if you apply a log to both sides of Equation (1) and go directly to Equation (3), without writing Equation (2).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your comments on our work. Please find our responses below.\\n\\n> It is not quantified how much the biological experiments benefit from the active learning algorithm in terms of the total number of necessary perturbations ...\\n\\nAs illustrated in Figure 6, within 50 batches (i.e. 500 pairs of the possible 1225) we are able to discover 12-15% more *known* interactions than random selection. If one could run all possible pairs, one would discover all the interaction effects. However, our focus is on efficiently discovering these interactions, which we demonstrate -- 46% of the interactions discovered with 40% of the total experiments. \\n\\n> I am not sure if it is possible, but it would make sense to additionally compare the proposed method against causal discovery methods ...\\n\\nAs mentioned in the comment, causal discovery methods are not typically suitable for our problem because they assume all variables that causally describe the response are observed, which is not viable in our setting. 
Our goal is to develop techniques that apply to *unstructured data* so we do not allow the possibility of \\\"transforming the data into a format that contains a single value for every node\\\".\\n\\nCausal representation learning is more aligned with our setting. However, as we discussed comprehensively in the related work section, these CRL works all attempt to disentangle unstructured observations into latent variables, and would solve our problem if successful (one could directly observe dependencies among the inferred latents). But they depend on assumptions which are at least as strong as the assumptions that we make here, and if either the assumptions that they make or the estimation procedures they use (i.e. fitting the relevant deep nets) fail, then any conclusions will not be valid. The challenge is that it is very difficult to know whether disentanglement is successful: the assumptions are untestable, and there is no validation set metric that tells you whether you have successfully disentangled latent variables (if there was, then by definition, there would not be an identifiability problem) which makes it very difficult to reliably tune hyperparameters.\\n\\nBy contrast, our interaction tests jointly test both the data generating process assumptions (which you don't want to test) and the independence assumptions (which you do). Our assumptions on the data generating process are comparatively mild, but even if they are incorrect, this will just result in false positives on the test rather than a failure of the inference procedure. Obviously we would prefer to avoid false positives, but in practice, a test that detects real interactions with a high false positive rate is still useful for biologists because it narrows the search space to a relatively small set of candidates.\\n\\n> By pairwise interactions do you mean causal dependencies or correlations? 
Can you also discover causal relations?\\n\\nAs mentioned in our response to Reviewer 1aqQ, we defined two notions of interactions: violations of separability and disjointedness. While these concepts are related to causal dependencies (particularly the separability), they are not equivalent. Specifically, within the separability framework, interactions are defined as instances where two perturbations target the same latent variables. Similar concepts have been studied in the context of causal representation learning; see [1] for reference.\\n\\n[1] Score-based Causal Representation Learning: Linear and General Transformations\\n\\n\\n> What do you mean by \\\"unstructured data\\\"? I would say that an image is structured as a grid of pixels.\\n\\nUnstructured data refers to measurements that do not directly reveal a specific pre-selected property. For example, if we are interested in cell viability, a structured measurement would be the number of healthy cells in a sample, whereas cell images---considered unstructured measurements---require additional processing to extract this information. For a more detailed explanation, please refer to lines 74\\u201384.\\n\\n\\n> Line 094 please elaborate on what you mean \\\"by this notion of interaction\\\".\", \"we_rigorously_define_the_two_types_of_perturbation_interactions\": \"violations of separability (Definition 3.5) and violations of disjointedness (Definition 3.7).\\n\\n> Line 374 what are 3 dimensional tabular data? Please provide the dimensions.\\n\\nThe data in this example is in $\\\\mathbb{R}^3$. The precise data generating process is provided in Appendix E.1.\\n\\n> Line 478 \\\"Genes from the same pathway are ordered adjacently\\\". Please rephrase/explain what it means.\\n\\nThis means that in the 2x2 matrix in Figure 5, genes from the same biological pathways are placed in adjacent rows (and columns). 
For the indexing of the selected genes and their corresponding pathways, please refer to Table 1 on page 24.\"}", "{\"title\": \"To all reviewers\", \"comment\": \"Dear Reviewers,\\n\\nWe thank you again for the thoughtful feedback on our work. We are working on incorporating changes based on the discussion in an updated version of the paper and hope to post it as soon as possible prior to the deadline.\"}", "{\"comment\": \"Thanks for following up so quickly!\\n\\n> However, in your setting is a continuous density, so I am unsure what you mean by a \\\"calibrated\\\" posterior in the language of, e.g., Guo 2017. You seem to hint at it with your comment on \\\"outputs a discrete distribution over a sufficiently fine-grained binning of rewards even though they are really continuous.\\\" Could you please clarify this comment? I understand the general intuition, however, that calibration is a weak condition and that a calibrated model could be inaccurate on individual instances, so calibration doesn't necessarily imply stopping at step 0.\\n\\nThe most direct analog for continuous random variables would be to check calibration of the cumulative distribution function, i.e. $P(Y < x| \\\\hat{F}(x) = p) = p\\\\quad \\\\forall p\\\\in[0,1], x\\\\in R$, but that is a far stronger condition than we need. In practice, if one cares about stopping, it would make sense to discretize $\\\\mathbf{R}$ with a threshold, $\\\\tau$, below which you deem the additive model to be \\\"accurate\\\". I.e. redefine $\\\\mathbf{R}$ as a binary matrix with $R_{i, j} = \\\\mathbf{1}( |h_{i,j}-h_i-h_j| < \\\\tau)$ [those should be \\\\vec{h} but the \\\\vec command is causing formatting errors]. In this case, the calibration requirement is simply that there is no $i, j$ for which $P(R_{i, j} = 1)$ is large (i.e. the model is confident that there will be no interaction), when in fact $i$ and $j$ do interact. 
As mentioned above - this is not part of this work's contributions so we are focusing on intuition rather than formalism (this argument is adapted from what we've done in follow-up work), but we hope this discussion shows that it is in principle possible to derive a stopping criterion in this framework.\"}", "{\"comment\": \"Thank you for your detailed response. I believe I now understand the main points of disagreement, but let me confirm three key points below.\\n\\nRating-wise, since we are reviewing the submitted manuscript, not any unseen revised versions, I keep the scores as is. \\n\\n### (1) Lack of comparisons with other methods \\n\\nThe response to this issue remains unconvincing. I do not believe it is impossible to compare your method with others. \\n\\nWhile your paper refers to \\u201cunstructured data,\\u201d it ultimately converts image features into embedded vectors using a pre-trained model\\u2014essentially transforming them into fixed-dimensional multivariate data\\u2014which are then used for interaction testing. As other reviewers may have pointed out, statistical interaction tests can be applied to multivariate data. Hence, a lack of comparisons with alternative methods is not justified.\\n\\n### (2) Identifying pairs with interaction effects \\u201cwithout directly measuring pairwise effects\\u201d \\n\\nI guess that the \\u201cAutomated discovery\\u201d in the title refers to the explanation in L95 of the paper: \\\"We can search the space of pairwise experiments by selecting **pairs of perturbations that are likely to result in large test statistics**. In doing so, we reduce the problem of finding interacting pairs of perturbations into an active matrix completion problem.\\\"\\n\\nIf I understand correctly, this means predicting test statistics for **unmeasured pairs (k, l)** based on observed perturbation results for other pairs (i, j). 
This would imply predicting pairwise effects for (k, l) **without directly measuring their double perturbation effects**, wouldn't it? This is the point I find the most difficult to fully accept regarding the validity of the setup. \\n\\nFrom the author response, I guess that this is possible due to assumptions employed in an existing work [1], such as the matrix being low-rank and columns following a Gaussian distribution. These assumptions are briefly mentioned in L349 of your manuscript. But [1] does not specifically target the setting of this paper, i.e., pairwise interactions like synthetic lethality, so simply relying on existing assumptions requires careful validation and discussion within the paper. Are these assumptions not something that can be derived from Assumptions 3.1, 3.2, and 3.4, but rather supported only by the empirical fact that the experimental results seem to work well? The paper feels theoretically solid in parts, but the critical sections come across as very loose.\\n\\nThis gives the impression that this paper just borrows the established framework [1] for \\\"automated discovery\\\" without fully assessing whether its technical assumptions (e.g., low-rank matrix structure, Gaussian-distributed columns) are appropriate for this paper's research goal. By definition, the pairwise effects this paper targets are seemingly hard to predict from partial observations of other pairs, and thus careful validation and discussion of these assumptions seem necessary.\\n\\n[1] Adaptive sampling for discovery. NeurIPS 2022, 35, 1114\\u20131126. \\n\\n### (3) \\\"domain knowledge and modeling of underlying structures are essential\\\"\\n \\nMy comment on this point might have been confusing. I did not mean to refer to any conclusions from a specific study. Rather, I meant that prior research on detecting epistasis (or synthetic lethality as a specific case) often incorporates additional information, such as PPI, GO, pathway data, or knowledge graphs. 
See the following, for example.\\n\\n- [2] Benchmarking machine learning methods for synthetic lethality prediction in cancer. Nat Commun 15, 9058 (2024). https://doi.org/10.1038/s41467-024-52900-7 \\n- [3] Discovery of synthetic lethal interactions from large-scale pan-cancer perturbation screens. Nat Commun 13, 7748 (2022). https://doi.org/10.1038/s41467-022-35378-z \\n\\nThis is because, by definition, synthetic lethality is hard to identify from indirect observations alone. Predictions therefore usually require additional assumptions or information.\\n\\nIn this sense, the additional assumptions in this paper would be the ones from [1], i.e. low-rank matrix structure, Gaussian-distributed columns. But this point should have been more carefully verified and discussed in the paper. Plus, further validation, discussion, and comparisons with existing methods seem necessary. For reference, [2] includes benchmarking of three matrix factorization-based methods (SL2MF[4], CMFW[5], GRSMF[6]). \\n\\n- [4] SL2MF: Predicting synthetic lethality in human cancers via logistic matrix factorization. IEEE/ACM Trans. Comput. Biol. Bioinform. 17, 748\\u2013757 (2020). \\n- [5] Predicting synthetic lethal interactions using heterogeneous data sources. Bioinformatics 36, 2209\\u20132216 (2020). \\n- [6] Predicting synthetic lethal interactions in human cancers using graph-regularized self-representative matrix factorization. BMC Bioinform. 20, 1\\u20138 (2019).\"}", "{\"comment\": \"We thank you for the helpful feedback on our work. Please find our responses below.\\n\\n> The use of a statistical test for matrix completion is a little poorly motivated. In particular, given an n x n matrix completion problem, at least one pair is likely to have a large test statistic ...\\n\\nThis is a great question and worth discussing in more detail. Our original motivation for working on this problem was thinking about exploring, e.g. all pairwise knockouts of the human genome (approx. 
20 000 genes, so at least 200 million experiments). In spaces this large, you're always bounded by budgets and so optimal stopping is not a concern. \\nThat said, we have follow-up work to this which considers the problem of how to make single perturbation experimentation more efficient [no citation; currently under review]. In both this paper and that work, our acquisition function essentially optimizes for examples where we are likely to make incorrect predictions (under the disjointness statistic; see section 3.2). Because of this, if our posterior over $\\\\mathbf{R}$ is calibrated (this can be relaxed to weaker notions of calibration), then for each batch of samples, $B$, that you select at each round, the average loss over the batch, $\\\\frac{1}{|B|}\\\\sum_{i,j \\\\in B}\\\\|\\\\vec{h}_{i,j} - (\\\\vec{h}_i +\\\\vec{h}_j)\\\\|$, is an upper bound on the average loss of IID samples from $\\\\mathbf{R}$, and hence can be used to construct an explicit stopping criterion. Intuitively, this can be thought of as, \\\"if you're accurately predicting all the pairs you think are *most likely to interact*, then there probably are no interactions left in the matrix.\\\" For this approach to work, you need large enough batch sizes, but in practice batch sizes tend to be large to keep experimental costs per sample low. \\n\\n\\n> Second, I think the authors did not explain why one would use imaging data for this matrix completion problem, instead of the more structured RNAseq data.\\n\\nAs explained in lines 74\\u201384, unstructured measurements, such as cell microscopy images, can be acquired through automated high-throughput perturbation platforms, making them a significantly cheaper alternative to structured measurements like RNA-seq. While cell images may provide less precise signals for specific pre-selected properties, they retain rich biological information [see Celik et al. 2024 for a comparison showing the similarity between RNA-seq and microscopy data]. 
The challenge, of course, is that we need methods that are able to work with unstructured data directly (rather than rely on preprocessing or specially designed assays). To our knowledge, this is the first work that provides a rigorous approach to achieving this goal of detecting interactions from unstructured data. **As such, we view the interaction tests as our primary contribution.** We performed inference on these unstructured measurements (microscopy images) to demonstrate that these tests work even in the extremely challenging real-world setting of microscopy imaging. \\n\\n> Third, I did not get a sense of how often the joint distribution differs from the product of the marginals in this setting...\\n\\nThank you for bringing up this very relevant work. Ahlmann-Eltze et al [2024] is concurrent work (note that it was published the week before the ICLR abstract deadline), but they arrive at a very similar conclusion to our original experiments that motivated our disjointness test. The additive model that outperforms the deep learning based approaches in Ahlmann-Eltze et al [2024] is $x_{i,j} = x_i + x_j - x_{0}$ (see equation 2 in [Ahlmann-Eltze 2024]). Because they are working with bulk expression data, this should be interpreted as $E[x|\\\\delta_{i,j}] = E[x|\\\\delta_i] + E[x|\\\\delta_j] - E[x|\\\\delta_{0}]$ in our notation (i.e. $x_i$ corresponds to the average expression from knockout $i$). This is just a special case of equation 5 in our paper, when you choose the embedding function $h(\\\\cdot)$ as the identity function. Thus, our disjointness test can be interpreted as a test for when the additive model in Ahlmann-Eltze [2024] will provide good predictions of double knockouts. It is also much more general, because it also provides sufficient conditions for additivity for *any* embedding function, $h(\\\\cdot)$. 
We use this observation as a starting point for our method: rather than try to predict double perturbation outcomes (which as Ahlmann-Eltze [2024] point out, has been fairly unsuccessful thus far), we aim to experiment only where this assumption fails. The heat maps in Figure 5 left and middle show two different estimates of where it fails. Consistent with Ahlmann-Eltze [2024], there is a lot of \"dark blue\" in the matrix where the additive model would work well, but there are also many places where it fails.\"}", "{\"comment\": \"> Q1) See weaknesses section on detecting that all relevant gene pairs have been found\\n\\nPlease see our response above. \\n\\n> Q2) How many gene pairs usually have non-linear interactions? Or interactions that are significantly different from the raw pair?\\n\\nPlease see our response above. \\n\\n> Q3) Please include a baseline in which we reveal $\\\\delta_{ij}$ on the pair $i,j$ for which $\\\\delta_i,\\\\delta_j$ are marginally most different from $\\\\delta_0$, perhaps taking the sum of the log density ratios.\\n\\nAs clarified above, we focus on detecting when this double perturbation result **cannot** be recovered from adding single perturbations. Hence we think this baseline is not relevant to our objective. \\n\\n> Q4) How many samples are required for Appendix B.1 to become a faithful estimator of the densities? What happens if our densities are very far off?\\n\\nWe agree that sample complexity is an important concern here. Our methods do not require learning the full densities; instead, we estimate the KL divergence by learning density ratios, which is generally easier because you are essentially learning a classifier. 
In all our examples, we train the density ratios using a few thousand images for each perturbation class, which is reasonable in cell imaging examples.\\n\\nWe think that a theoretical treatment of the sample complexity of these methods is an important future direction, but we emphasize that the KL estimator itself is not our primary contribution. Our main contribution lies in showing *why* the KL divergence can be effectively used for interaction detection. More efficient estimators of this divergence will naturally lead to better tests.\\n\\nThank you again for the comments on our work. We are happy to answer any further questions you have during the discussion period.\"}", "{\"comment\": \"> Two methods for testing the interaction effect are proposed, but their distinct purposes and value are unclear. Which method appears more promising, and under what conditions should each be used?\\n\\nThe two different tests target different types of interactions. Separability is somewhat more natural in that it tests whether two perturbations target the same latent variables. This is useful for discovering the target of a perturbation (e.g. a drug with an unknown target) by testing whether it interacts with a perturbation with a known target (e.g. a gene knockout).\\n\\nThe second notion, disjointedness, has a less natural interpretation but is of interest for active learning because if two perturbations are disjoint, we can predict their pairwise embedding, $h_{i,j} - h_\\\\emptyset$, by summing the centered individual embeddings, $(h_i - h_\\\\emptyset) + (h_j - h_\\\\emptyset)$. While you need samples from $h_{i,j}$ to evaluate this score, by learning a posterior over the disjointness score using active matrix completion, you can predict for which $i, j$ pairs $h_{i,j}$ can be estimated using single perturbations.\\n\\nBoth scores can be used in an active learning pipeline, but they have different use cases. 
If your goal is finding pairs of perturbations that interact, separability has a more natural interpretation; if your goal is predicting pairwise embeddings, disjointness is better.\\n\\n\\n> How do you test for the KL divergence identity condition required for the \\\"separability test\\\"? ...\\n\\nThis is a great question, and we will clarify it in the camera-ready version. We did not establish a rigorous rejection criterion for the KL-based test. While this is less critical when the goal is to use the test statistic for active learning, we agree that a formal hypothesis testing framework is important from a statistical perspective. This issue has been addressed in a follow-up work.\\n\\n\\n> The theoretical justification for the entire Section 4 seems insufficient; could you provide additional support if available? ...\\n\\nPlease see our response above. \\n\\n> A clearer explanation of what prior information or assumptions allow us to predict interaction effects without ...\\n\\nThe low-rank assumption comes from the fact that there are correlations in the functional behavior of different genes. The assumption of Gaussian prior on the columns was made to follow prior work [1]. See response in weaknesses for further details. \\n\\n[1] Xu, Z., Shim, E., Tewari, A., & Zimmerman, P. (2022). Adaptive sampling for discovery. Advances in Neural Information Processing Systems, 35, 1114-1126.\\n\\n\\n> Clarify how the proposed approach handles the lack of repeated sampling for many (i,j) pairs? Explain the assumptions and modifications needed to apply bandit methods in this setting?\\n\\nPlease see our responses above. 
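As a toy illustration of the disjointness use case described above, the additive prediction of a pairwise embedding from centered individual embeddings can be written directly; all arrays below are hypothetical placeholder embeddings, not outputs of our model:

```python
import numpy as np

def predict_pair_embedding(h_i, h_j, h_0):
    """For disjoint perturbations, predict the pairwise embedding by summing
    the centered individual embeddings: h_ij - h_0 ~ (h_i - h_0) + (h_j - h_0)."""
    return (h_i - h_0) + (h_j - h_0) + h_0

# toy check: under a purely additive model the prediction is exact
h_0 = np.array([0.0, 0.0, 1.0])    # control (empty perturbation) embedding
h_i = np.array([1.0, 0.0, 1.5])    # perturbation i alone
h_j = np.array([0.0, 2.0, 1.5])    # perturbation j alone
h_ij_pred = predict_pair_embedding(h_i, h_j, h_0)   # -> [1., 2., 2.]
```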
\\n\\n> Provide a more detailed interpretation of Figure 6, explaining what insights can be gained from the comparison to random search and conventional bandit methods?\\n\\nThe goal of the experiment in Figure 6 is twofold: first, to validate the active matrix completion approach for the test statistic matrix, and second, to validate the overall approach for discovering known biological interactions. The left and right panels demonstrate that IDS is able to discover the top values of the test statistic effectively. IDS is able to find all the top 5% values of the test statistic matrix within 50 rounds (i.e. 500 experiments out of the possible 1225). The panel on the right illustrates the number of known biological interactions discovered. This validates whether the test statistic captures the biological interactions faithfully. We find that IDS, which finds the top values of the test statistic, indeed discovers a larger fraction of the known interactions, although the margins are smaller.
Our framework is flexible and can accommodate various estimators, allowing users to select an appropriate KL estimation method based on their data type. For example, in our synthetic experiments, a simple KNN-based KL estimator proved reliable for low-dimensional observations.\\n\\nIn our experiments, we used a KL estimation pipeline informed by recent advances in sample-based KL estimation. A detailed explanation of our approach, including the rationale for our chosen pipeline and a survey of alternative strategies from the literature, is provided in Appendix B.\", \"additional_clarifications\": \"- The reviewer suggested using iVAE (we assume this refers to identifiable VAE; [Khemakhem et al., 2020]) in our framework. However, iVAE does not provide valid KL estimates for samples from two distributions, and we are unsure of its applicability to our approach.\\n- If the reviewer is proposing latent variable identification (essentially learning causal representations for disentanglement) instead of directly testing the latent dependence from observations, we have discussed in detail in the related work section why this approach is less suitable for our purposes. It's also worth noting that iVAE requires conditionally independent latents (conditional on an environment variable) a priori, while we are testing whether there is a dependence in the latent space.\\n\\n> I also agree with Reviewer 1aqQ, in that it's important to see your particular definition of interaction compared with some kind of baseline. While I'm not very familiar with this area, it seems implausible that this is the first time anyone has considered the notion of interaction between two vector-valued random variables.\\n\\nPlease see our response to the follow-up comments from Reviewer 1aqQ.
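Returning to the KNN-based KL estimator mentioned above: a minimal 1-D numpy sketch in the style of classical nearest-neighbour divergence estimators (illustrative only, not our exact pipeline) looks like this:

```python
import numpy as np

def knn_kl_estimate(x, y, k=5):
    """k-nearest-neighbour estimate of KL(p || q) from 1-D samples x ~ p, y ~ q:
    compare each x_i's k-NN distance within x (rho) to its k-NN distance in y (nu)."""
    n, m = len(x), len(y)
    # sorted pairwise distances; within x, column 0 is the point itself (distance 0),
    # so index k gives the k-th neighbour excluding self
    rho = np.sort(np.abs(x[:, None] - x[None, :]), axis=1)[:, k]
    nu = np.sort(np.abs(x[:, None] - y[None, :]), axis=1)[:, k - 1]
    return float(np.mean(np.log(nu / rho)) + np.log(m / (n - 1)))

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, 2000)    # p = N(1, 1)
y = rng.normal(0.0, 1.0, 2000)    # q = N(0, 1); true KL(p || q) = 0.5
est = knn_kl_estimate(x, y)       # close to 0.5
```

Such estimators avoid density fitting entirely, which is why they behave well in the low-dimensional synthetic setting; in higher dimensions the `np.abs` distance would be replaced by a k-d tree query and the log-ratio scaled by the dimension.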
\\n\\n> On the right-hand side, I believe $p_{Z_i}$ is a density over a scalar-valued random variable, and its argument $g^{-1}(x)$ is a vector-valued random variable with dimension $L$.\\n\\nThank you for pointing this out! This is admittedly an imprecise expression---$p_{Z_i}(g^{-1}(x))$ should be interpreted as $p_{Z_i}([g^{-1}(x)]\\\\_i)$, where $ \\\\[ \\\\cdot \\\\]\\\\_{i} $ is the projection operator onto the subspace of $Z_i$. The precise expression (updated in the manuscript) is as follows: \\n$$ \\\\frac{p(x | \\\\delta_i)}{p(x|\\\\delta_0)} = \\\\frac{p_Z(g^{-1}(x)| \\\\delta_i)\\\\left | \\\\text{det}(J(g^{-1}(x))) \\\\right|}{p_Z(g^{-1}(x)|\\\\delta_0)\\\\left | \\\\text{det}(J(g^{-1}(x))) \\\\right|} = \\\\frac{p^\\\\dagger_{Z_i}([g^{-1}(x)]\\\\_i)}{p_{Z_i}([g^{-1}(x)]\\\\_{i})}.$$\", \"title\": \"Thank you for your follow-up comments! Please see our response below.\"}
I understand the general intuition, however, that calibration is a weak condition and that a calibrated model could be inaccurate on individual instances, so calibration doesn't necessarily imply stopping at step 0.\\n\\nThe second point is interesting. However, I am unable to see any appendix figure in the PDF mirroring Fig. 5. I believe the PDFs can be updated until tomorrow \\u2014 if a PDF with a fresh copy of this Figure is uploaded soon and convincingly makes the case that the additive model is insufficient in large regions of the matrix, I would be willing to raise my score.\"}", "{\"summary\": \"This paper presents a novel active learning method to discover pairs of highly correlated variables. Concretely, we imagine that an experimentalist has access to an observations space $\\\\mathcal{X}$, which change as a function of perturbations. Crucially, some pairs of perturbations $\\\\delta_i, \\\\delta_j$ jointly operate ($\\\\delta_{ij}$) differently on $\\\\mathcal{X}$ relative to their marginal individual effects. The key idea of this paper is to estimate three densities \\u2014 $p(x|\\\\delta_i), p(x|\\\\delta_j), p(x|\\\\delta_{ij})$ \\u2014 which capture the distribution of $x$ based on marginal and joint perturbations and compare them to $p(x|\\\\delta_0)$, the world in which no perturbations are applied.\\n\\nThe authors formulate their active matrix completion problem in a statistical hypothesis testing framework. In particular, at each step, they estimate a posterior over $\\\\mathbf{R}$, a matrix of test statistics for all pairs of perturbations, and greedily selects a batch of experiments to run in the next step.\\n\\nThe authors apply this framework to CRISPR knockouts on pairs of 50 genes. 
Their dataset includes observations for all pairs in this 50 gene set, so the authors simulate an active experimental design by masking the gene-pair perturbation observations and then performing matrix completion while progressively revealing them. Their method, which they term IDS, quickly discovers top gene pairs relative to standard baselines, and minimizes natural errors like regret.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper is well-written and proposes an interesting method for active experimental design. Estimating the joint density and comparing to the marginal density is an interesting idea, both for the disjointedness analysis and for the separability analysis.\\n\\nAlthough prior work (e.g., GEARS) has formulated CRISPR pair perturbation response prediction as a matrix completion problem, the active learning formulation seems novel, especially in dealing with imaging modalities (as opposed to RNAseq). The authors additionally spell out their assumptions \\u2014 such as a set of latent variables $Z$ which capture all the information about perturbations \\u2014 clearly. Their statistical testing framework is clear and well-motivated.\\n\\nThe authors additionally compare to plenty of baselines on their experiments. Although they only perform experiments on one real-world dataset, experimentation in this domain is expensive, so this is a natural choice and should not be seen as a limitation of their work.\\n\\nThe appendix thoroughly explains the architecture and the methodological choices.\", \"weaknesses\": \"The use of a statistical test for matrix completion is somewhat under-motivated. In particular, given an n x n matrix completion problem, at least one pair is likely to have a large test statistic.
Given this is an active learning problem, multiple hypothesis testing is not as central a concern, but I'd like the authors to discuss in their rebuttal what a \\\"null\\\" set of interaction pairs might look like. Concretely, suppose that in a module of 50 genes (so 1225 gene pairs) only 5 gene pairs interact with each other. Can this method be used to decide a \\\"stopping\\\" criterion for experimentation \\u2014 i.e. would you be able to note that the posterior on the test statistics obeys the null distribution once those 5 gene pairs have been found? Could you:\\n1. Describe the null distribution of test statistics in this framework\\n2. Discuss how this method could be used to decide a stopping criterion\\n\\n\\nSecond, I think the authors did not explain why one would use imaging data for this matrix completion problem, instead of the more structured RNAseq data. Could you please justify why RNAseq data isn't used instead of imaging data?\\n\\nThird, I did not get a sense of how often the joint distribution differs from the product of the marginals in this setting. There is some prior work that very few perturbations actually obey this relation, e.g. https://www.biorxiv.org/content/10.1101/2024.09.16.613342v1.full.pdf. How often is this method necessary, and why can't we just experiment on the pairs $\\delta_{ij}$ for which $\\delta_i, \\delta_j$ are marginally most different from $\\delta_0$? Could you provide quantitative analysis on how often joint distributions significantly differ from products of marginals in this dataset?\", \"questions\": \"Q1) See weaknesses section on detecting that all relevant gene pairs have been found\\n\\nQ2) How many gene pairs usually have non-linear interactions?
Or interactions that are significantly different from the raw pair?\\n\\nQ3) Please include a baseline in which we reveal $\\delta_{ij}$ on the pair $i,j$ for which $\\delta_i, \\delta_j$ are marginally most different from $\\delta_0$, perhaps taking the sum of the log density ratios.\\n\\nQ4) How many samples are required for Appendix B.1 to become a faithful estimator of the densities? What happens if our densities are very far off?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes a new method for exploring the structure of a black-box system of interest by introducing perturbations, specifically focusing on identifying \\\"interacting pairs\\\"\\u2014pairs of perturbations that yield results significantly different from the effects of each perturbation applied individually. The paper proposes two methods to examine whether given pairs of perturbations exhibit a non-trivial \\\"interaction effect\\\" in this sense. The first method is a \\\"separability\\\" test, which checks if the effects of a pair of perturbations provide information beyond that obtained from each perturbation alone. The second method is a \\\"disjointedness\\\" test, which quantifies whether each perturbation in the pair influences distinct subsets of the outcome space. The separability test uses a criterion based on KL divergence, while the disjointedness test uses an MMD-based two-sample test for identity. Additionally, they treat the task of calculating each test statistic for all pairs (i, j) as a matrix completion problem and apply an active learning-based sequential experimental design using ADS (Xu et al., 2022) to identify pairs (i, j) that potentially have larger test-statistic values.
Experiments demonstrate the superiority of the proposed method on benchmark synthetic data that meets the study's assumptions, and they conduct a synthetic lethality test to determine if two gene knockouts result in cell lethality, empirically confirming the method\\u2019s effectiveness in real biological systems.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses the problem of identifying pairwise interactions, specifically highlighting cases where the effect of two perturbations, such as cell lethality from double gene knockout, is entirely different from the effects of each perturbation alone. In the experiments, gene knockout was actually performed to validate effectiveness.\", \"The two proposed tests are technically intriguing. Each test is well-organized with necessary assumptions and effectively leverages existing theories, incorporating reasonable methods such as MMD-based two-sample tests (Gretton, 2012) and ADS (Xu et al., 2022) to construct an effective procedure.\", \"Despite the potentially abstract topic of identifying \\\"interaction effects\\\", the Introduction clearly explains the ideas and intentions. The discussion is grounded with examples, such as validation on synthetic data and actual biological applications, making the logic easy to follow.\"], \"weaknesses\": [\"The two proposed interaction tests are not compared with any standard methods. The problem in question is not new; it has a long history in statistics as the \\\"interaction effect,\\\" where the combination of two or more factors produces an effect greater (or less) than the sum of their individual effects [1][2]. Traditional applied statistical methods (likelihood-ratio tests, two-way ANOVA, etc.) have also been used for identifying synthetic lethality [3], so a comparative analysis and discussion of differences with conventional methods are necessary.
The paper only includes internal comparisons among variations of the proposed method, offering limited basis for objectively assessing its effectiveness against previous methods.\", \"[1] Cox, D. R. (1984). Interaction. International Statistical Review / Revue Internationale de Statistique, 52(1), 1\\u201324. https://doi.org/10.2307/1403235\", \"[2] Amy Berrington de Gonz\\u00e1lez, & Cox, D. R. (2007). Interpretation of Interaction: A Review. The Annals of Applied Statistics, 1(2), 371\\u2013385. http://www.jstor.org/stable/4537441\", \"[3] Akimov Y, Aittokallio T. Re-defining synthetic lethality by phenotypic profiling for precision oncology. Cell Chem Biol. 2021 Mar 18;28(3):246-256. doi: 10.1016/j.chembiol.2021.01.026.\", \"Biologically, this issue is known as \\\"epistasis,\\\" a well-researched topic with substantial existing literature. While statistical interaction tests have been applied, domain knowledge and modeling of underlying structures are essential (e.g., [4][5]). Although this study treats synthetic lethality as a special case and even conducted biological experiments, it does not discuss this background, leaving unclear what value this research or its new methods can provide on the top of many traditional studies on this long-standing research topic.\", \"[4] Segr\\u00e8, D., DeLuna, A., Church, G. et al. Modular epistasis in yeast metabolism. Nat Genet 37, 77\\u201383 (2005). https://doi.org/10.1038/ng1489\", \"[5] Terada A, Okada-Hatakeyama M, Tsuda K, Sese J. Statistical significance of combinatorial regulations. Proc Natl Acad Sci U S A. 2013 Aug 6;110(32):12996-3001. doi: 10.1073/pnas.1302233110.\", \"Consequently, the adaptive sampling and bandit approaches proposed in Section 4 to identify pairs with interaction effects \\u201cwithout directly measuring pairwise effects\\u201d are unconvincing and require additional justification, clearer assumptions, and validation. 
By definition, determining the presence of an \\\"interaction effect\\\" would seem to require measuring both pairwise and single effects at least. For example, in synthetic lethality, there are gene pairs where knocking out either gene alone is non-lethal, but a double knockout is lethal. In this case, we have no observable clues suggesting that this pair is likely to have an \\\"interaction effect.\\\" While assumptions in Sections 3.1 and 3.2 seem related to this point, some, such as the \\\"low-rank R with a Gaussian prior on the columns\\\" in line 350, appear ad hoc and inadequately justified. This point requires thorough analysis and explanation for proper justification of the proposed method.\", \"Further, introducing bandit (specifically, best-arm identification) for this problem requires clarification. In bandit approaches, sampling each arm multiple times is generally assumed, making the theory inapplicable in settings with no sampling for many (i,j) pairs. For synthetic lethality, for instance, it would be necessary to test double knockouts across all (i, j) pairs with replicates. Figure 6 compares the method with random search and conventional bandit methods, but the meaning of this comparison is unclear, necessitating additional justification.\", \"While the paper aims to address general \\\"interaction effect\\\" identification, its validation is limited to synthetic lethality, providing insufficient evidence for its general effectiveness across other cases.\"], \"questions\": [\"For research proposing a new method, it would be essential to compare its advantages and accuracy against existing approaches. 
There are various statistical methods (likelihood-ratio tests, two-way ANOVA, etc) to identify interaction effects, so did you conduct comparisons with these standard methods?\", \"In terms of detecting synthetic lethality, I believe there are several studies in genetic statistics related to identifying \\\"epistasis.\\\" Did you consult this literature?\", \"What are the primary reasons you believe traditional methods are inadequate for this purpose and we need new methods?\", \"Two methods for testing the interaction effect are proposed, but their distinct purposes and value are unclear. Which method appears more promising, and under what conditions should each be used?\", \"How do you test for the KL divergence identity condition required for the \\\"separability test\\\"? Is this a simple binary decision based on whether the KL divergence is below a certain threshold, rather than a hypothesis test? If so, how did you determine the threshold?\", \"The theoretical justification for the entire Section 4 seems insufficient; could you provide additional support if available? The \\\"interaction effect\\\" as defined in this paper seems fundamentally based on comparing the effects of \\\"double perturbation\\\" with \\\"individual perturbations.\\\" Given this, the rationale for identifying promising pairs with interaction effects using adaptive sampling or bandit without actually conducting double perturbations is unconvincing. If you view this as an application of bandit\\u2019s best-arm identification, typically, double perturbation across all (i, j) pairs would need to be measured several times to gain meaningful information on arm expectations. 
Could you provide the following information?\", \"A clearer explanation of what prior information or assumptions allow us to predict interaction effects without measuring pairwise effects\", \"More rigorous justification for the low-rank and Gaussian prior assumptions\", \"For the evaluation of the main claim of \\\"automated discovery of pairwise interactions\\\" as suggested in the paper's title, could you:\", \"Clarify how the proposed approach handles the lack of repeated sampling for many (i,j) pairs?\", \"Explain the assumptions and modifications needed to apply bandit methods in this setting?\", \"Provide a more detailed interpretation of Figure 6, explaining what insights can be gained from the comparison to random search and conventional bandit methods?\", \"Have you conducted any validation beyond the synthetic lethality example? As the title suggests, this paper aims to propose a general method for testing \\\"interaction effects,\\\" but with no examples beyond synthetic lethality, it might be more suitable to focus claims on identifying biological \\\"epistasis.\\\"\", \"For instance, when generating test cases in software development, identifying bugs that occur only under certain condition combinations could fit within this paper\\u2019s defined \\\"interaction effect.\\\" A system might function correctly when condition 1=True or condition 2=True, but malfunction when both are true. Do you think this method is also effective for such identification? Specifically, what is the rationale for identifying this condition combination as promising without observing the behavior under condition 1=True and condition 2=True? Would this not be challenging in black-box testing?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the comments on our work. 
Please find our responses below.\\n\\n> The two proposed interaction tests are not compared with any standard methods ...\\n\\nWe thank the reviewer for these suggestions and will include a discussion in the camera-ready version. However, the first two classical methods from Cox and his coauthors do not apply in our setting because they focus scalar outcome variables whereas the **primary motivation for our paper is to ask how to generalize a notion of interaction to *unstructured* outcome data like the pixels in an image**. Our task is significantly more challenging because we not only have to contend with the high dimensionality of image data, but our notion of interaction has to be robust to arbitary bijective transformations of the data (because we cannot assume that we are in control of measurement process). To emphesize this point, consider this quote from Berrington de Gonz\\u00e1lez and Cox [2007] which reveals just how dependent the classical approaches to interaction testing are on the judgement of a human analyst, \\\"For a continuous and positive response variable, $y$, the transformations commonly used are logarithmic and simple powers, occasionally with a translated origin. For binary data, the logistic or sometimes probit or complementary log scale may be effective. While achieving additivity of effects is helpful, interpretability is the overriding concern. Thus, the transformation from $y$ to $y^{1/3}$ might remove an interaction but, unless y was a representation of a volume, $y^{1/3}$ might well not be a good basis for interpretation.\\\" Please refer to our response in the relevant section for further details.\\n\\nThe third reference from Akimov [2021] serves as a fantastic motivation for the problem that we solve in this paper when they argue that, \\\"emerging *phenotypic profiling methodologies* will improve the discovery of therapeutically relevant, novel SL interactions. [emphasis added]\\\". 
They emphasize the need for more general notions of synthetic lethality (e.g. \\\"synthetic sickness\\\"; similar to your more general point about epistasis below), but they do not provide methods for detecting these more general notions of synthetic lethality (their Figure 4 D appears to suggest using tSNE as a way of identifying relevant outcome variables; this will lead to *very* unreliable tests!). Our interaction tests solve exactly this problem. \\n\\n\\n> Biologically, this issue is known as \\\"epistasis,\\\" a well-researched topic with substantial existing ....\\n\\nYour point that \\\"domain knowledge and modeling of underlying structures are essential\\\" is likely the key point where we disagree about the value of this paper. Domain knowledge is clearly *sufficient* if it allows one to extract a well-defined outcome variable from unstructured data. But neither of the referenced papers claims it is *necessary* to detect interactions. In this paper we give two well-defined notions of interaction that apply to unstructured data. Because they do not depend on domain knowledge, they will tend to have lower specificity than tests for known interaction targets; but this also allows them to be far more scalable in detecting interactions in high-throughput screens. If every gene pair that interacted produced a different morphological phenotype, then you'd need a different test for each phenotype; whereas our tests allow you to detect interactions without prespecifying the form of the interaction.\\n\\nWe concede that we have done a poor job of connecting with the biology literature, and we will update our references accordingly; thank you for pointing these out. That said, we emphasize that the key contribution of this work is in the interaction testing methodology itself, which we then illustrate on biological data; this is not a biology paper and does not claim to be.
This is an ICLR submission: the original focus of this conference has always been on general methods for learning from unstructured data, so tests that allow us to move beyond assuming access to domain knowledge are very much the core methodological focus of the conference.\"}", "{\"comment\": \"> (2) ...\\n> (3) ...\\n\\n We agree that rigorously understanding when *prediction of test statistics for unmeasured pairs (k, l) based on observed perturbation results for other pairs (i, j)* is possible, and ensuring that the active data acquisition pipeline works effectively, are critical research directions. Validating these conditions for the gene-gene interaction problem is indeed valuable. However, each of these questions is complex enough to warrant its own dedicated research project, which is beyond the scope of this work. Our primary contribution is establishing the theoretical and methodological framework for quantifying pairwise interaction effects from unstructured observations and empirically demonstrating the feasibility of active data acquisition to optimize experimental budgets. As discussed in previous responses, these challenges are well-known in the literature but remain largely unresolved, further underscoring the value of our contribution.\\n\\nIn terms of the specific concern about model assumptions in the active matrix completion problem for our gene-gene interaction example, we adopt the following Bayesian model to recover $\\\\mathcal{R}$:\\n$$\\\\mathcal{R} = U V^T + \\\\epsilon, \\\\quad U, V \\\\in \\\\mathbb{R}^{d \\\\times r}, \\\\epsilon \\\\sim N(0, \\\\sigma^2),$$ \\nwhere $r$ is a hyperparameter smaller than $d$, encoding the low-rank structure of $\\\\mathcal{R}$. We impose a Gaussian prior distribution on the entries of $U$ and $V$, and the inference result is characterized by the posterior distribution $p(U, V| \\\\mathcal{R})$. 
The active matrix completion problem aims to approximate $p(U, V| \\mathcal{R})$ without requiring all entries of $\\mathcal{R}$. \\n\\nAs previously explained, the low-rank assumption of $\\mathcal{R}$ is justified by the morphological similarity across genes, particularly those within the same biological pathways. Our empirical results in Section 5.1 show that gene-gene interaction scores tend to cluster by relevant pathways (see lines 446\\u2013456). We have included this discussion in Lines 350--351 of the updated manuscript.\\n\\nRegarding the choice of the prior, we emphasize that the validity of this Bayesian inference does not depend on $U$ and $V$ being generated from a Gaussian distribution. The Gaussian prior is merely a modeling assumption for $U$ and $V$, while the posterior distribution is non-Gaussian. We remark that our ADS pipeline does not require any changes if other priors for $U$ and $V$ are adopted. Poorly specified priors might lead to suboptimal posterior characterizations of $U$ and $V$, reducing the effectiveness of the active matrix completion procedure. However, our empirical results suggest that the active matrix completion method performs effectively. \\n\\nWe have now added this clarification in Appendix E.3.\"}", "{\"comment\": \"> Line 509. Can you explain more about how the top 5% pairs are chosen? Are they among the known interactions?\\n\\nThe top 5% of pairs refer to those with the highest interaction scores, indicating a higher likelihood of interaction according to the separability or disjointedness test. This result (left subfigure of Figure 6) demonstrates that our active matrix completion method effectively identifies pairs with high interaction scores.\\n\\nHowever, as noted in the limitations, these high-score pairs do not necessarily correspond to known interactions.\\n\\n> Line 511.
How is the regret computed?\\n\\nThe regret at step i is computed as the difference between the values of the actions selected up to step i and the top i values in the ground-truth matrix. Note that this metric is just for validating the approach and cannot be computed in practice (since the ground truth matrix is not known).\\n\\n> Some recommendations:\\n\\nWe thank the reviewer for the advice! We will improve the visualization of our results in the camera-ready version.\\n\\nThank you again for your comments. We are happy to answer any further questions you have during the discussion period.\"}", "{\"comment\": \"> Consequently, the adaptive sampling and bandit approaches proposed in Section 4 to....\\n\\nWe believe there may be a misunderstanding here. The phrase \\u201cwithout directly measuring pairwise effects\\u201d doesn't appear anywhere in our paper. We can only predict pairwise effects *if* the interaction tests in Section 3 show there is no interaction, but running those tests *requires* measuring pairwise effects. However, if we are prepared to assume that the full matrix of pairwise test statistics, $R$, is low rank, then we can predict the *test statistics* of unseen pairs from what we measure in seen pairs using an active matrix completion approach. The low rank assumption simply amounts to assuming that there is some similarity in morphological effect across genes. In the absence of such structure, the procedure would indeed be no better than random. However, we believe that the low rank assumption is sufficiently justified from the empirical observations in Figure 5.
The assumption on the columns being Gaussian is a variational approximation that follows prior work on IDS for this matrix completion problem.\\n\\n> Further, introducing bandit (specifically, best-arm identification) for this problem requires clarification ...\\n\\nIndeed it is true that in the traditional stochastic bandit setting, one assumes the ability to evaluate each arm multiple times. However, as we discuss in Section 4 (starting L305) we consider the discovery setting, originally proposed in [1], where each arm can be pulled exactly once and the set of actions (arms) available at each round shrinks. For this discovery setting, where each arm can only be pulled once, [1] studied the active matrix completion problem and showed that IDS achieves sublinear regret under the assumption that the matrix is low rank and the columns are drawn from a Gaussian prior.\\n\\n> For research proposing a new method, it would be essential to compare its advantages and accuracy against existing approaches ...\\n\\nWe did not compare to these methods because classical statistical design of experiments approaches (typically based on factor analysis) are not applicable to our high-dimensional unstructured measurement setting. The fundamental incompatibility lies in the fact that these methods typically only work for **scalar response variables**, where relationships between the response and factors (various treatments) are modelled using linear or generalized linear regressions with explicit parametric assumptions. Interactions between factors are interpreted as non-additive effects resulting from multiple treatments, with ANOVA or likelihood ratio tests used to assess the statistical significance of such non-additivity.\\n\\nIn contrast, our responses are **unstructured measurements like images**, where factor analysis fundamentally does not apply. 
Moreover, parametric assumptions such as normality of the response (e.g., cell images) are not sensible in this context.\\n\\nFurthermore, we want to highlight that the testing methods we employ are neither heuristic nor black-box approaches. The separability test is essentially a non-parametric likelihood-ratio test designed to evaluate the relationship described in Eq(1). Similarly, the disjointedness test is a classical two-sample test. Our key contribution lies in rigorously formulating perturbation interactions using probabilistic models, which can then be tested with well-established statistical methods.\\n\\n\\n> In terms of detecting synthetic lethality, I believe there are several studies in genetic statistics related to identifying \\\"epistasis.\\\" Did you consult this literature?\\n\\nAll of these assume domain knowledge of a scalar target variable, whereas we design tests that apply to unstructured observations. Please see our response in the weakness section.\"}", "{\"summary\": \"This paper proposes a method for detecting pairwise interactions between gene perturbations.\\nIt first proposes two statistical tests for deciding whether two perturbations are separable or disjoint.\\nThe statistical test of disjointness is employed in a greedy active matrix completion algorithm that decides which pair of perturbations to examine in the next step. The proposed method is evaluated in synthetic settings showing the validity of the statistical tests as well as in a real gene perturbation experiment where it performs better against other baselines that consider different choices of policy in the active matrix completion procedure.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed methodology is interesting.\\nThe main strength of this work is that it makes the biological experimental process more efficient and less costly.
\\nThe theoretical claims are justified with mathematical proofs and the effectiveness of the algorithm is empirically validated.\", \"weaknesses\": \"I am not a specialist in the biological field of gene perturbation experiments, but based on my understanding I would point out the following potential weaknesses for the improvement of the paper.\\n\\n1. It is not quantified how much the biological experiments benefit from the active learning algorithm in terms of the total number of necessary perturbations.\\nHow much more efficient is your algorithm compared to a standard exhaustive approach that would consider all possible perturbations? \\n\\n2. I am not sure if it is possible, but it would make sense to additionally compare the proposed method against causal discovery methods that utilize interventions, for example, DCDI [1] or GIES [2], or methods specifically designed for gene regulatory networks [3]. It would be interesting to see what would be the performance of a causal discovery method for a subset of given perturbations. Of course, that would require transforming the data into a format that contains a single value for every node for each sample.\\n\\n[1] Brouillard, Philippe, et al. \\\"Differentiable causal discovery from interventional data.\\\" Advances in Neural Information Processing Systems 33 (2020): 21865-21877.\\n\\n[2] Hauser, Alain, and Peter B\\u00fchlmann. \\\"Characterization and greedy learning of interventional Markov equivalence classes of directed acyclic graphs.\\\" The Journal of Machine Learning Research 13.1 (2012): 2409-2464.\\n\\n[3] Aibar, Sara, et al. \\\"SCENIC: single-cell regulatory network inference and clustering.\\\" Nature Methods 14.11 (2017): 1083-1086.\", \"questions\": \"I have the following questions.\\n\\n1. By pairwise interactions do you mean causal dependencies or correlations? Can you also discover causal relations?\\n2. What do you mean by \\\"unstructured data\\\"?
I would say that an image is structured as a grid of pixels.\\n3. Line 094 please elaborate on what you mean \\\"by this notion of interaction\\\".\\n4. Line 374 what are 3 dimensional tabular data? Please provide the dimensions.\\n5. Line 478 \\\"Genes from the same pathway are ordered adjacently\\\". Please rephrase/explain what it means.\\n6. Line 509. Can you explain more about how the top 5% pairs are chosen? Are they among the known interactions?\\n7. Line 511. How is the regret computed?\", \"some_recommendations\": \"1. The first two sentences in the introduction require citation.\\n2. The font size in Fig. 1 is very small.\\n3. Increase the font size of the colormap in Fig. 2.\\n4. The font size in Fig. 3 (labels on images on the right) is very small.\\n5. In Fig. 3 you need to explain what the image on the right shows (presence or absence of interaction).\\n6. In Fig. 4 you should add a label in each figure and increase the font size of the colormap.\\n7. For Fig. 5 the same, nothing is visible.\\n\\nI would like to hear your opinion on my feedback first and then be willing to raise my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response.\\n\\n> We respectfully disagree with this point. Section 3 of our work, which introduces the separability and disjointness tests, has two key objectives for unstructured measurements: (1) to rigorously define the notion of interaction, and (2) to design measurable metrics that quantify the intensity of such interactions. To our knowledge, no prior work has addressed these objectives (though we welcome references to relevant literature if we have overlooked any).\\n\\nThanks for the clarification, I see now that separability is different from mutual information. 
Seeing that it's the absolute difference between the joint and individual KL terms, there are many ways of computing it besides the approach that you tried (e.g. iVAEs). Since you're presenting a new framework, I think it's important to see the robustness of your conclusions across several implementation methods.\\n\\nI also agree with Reviewer 1aqQ, in that it's important to see your particular definition of interaction compared with some kind of baseline. While I'm not very familiar with this area, it seems implausible that this is the first time anyone has considered the notion of interaction between two vector-valued random variables.\\n\\n> If you are referring to the equation in lines 205\\u2013207, we believe it is correct.\\n\\nOn the right-hand side, I believe $p_{Z_i}$ is a density over a scalar-valued random variable, and its argument $g^{-1}(x)$ is a vector-valued random variable with dimension $L$.\"}", "{\"comment\": \"Thank you for your comments on our work. Please find our responses below.\\n\\n> This is an applied paper, and I believe the novelty is weak.\\n\\nThis paper provides the first rigorous definition of an interaction between treatment variables on unstructured data, and it supports these definitions with two theorems. Further, it provides the first active learning approach to image-based outcomes (images as inputs are well-studied) and supports this with rigorous experimentation on real-world data (that was specifically collected for this study). \\n\\nRegarding the novelty of our approach, please see our detailed point-by-point response below.\\n\\n> The separability metric essentially measures the mutual information between the latent variables corresponding to a pair of perturbations. There are many different ways to instantiate this, and it's unclear whether the authors' proposed metric itself is a valuable contribution. A similar statement also holds for the disjointness metric.\\n\\nWe respectfully disagree with this point.
Section 3 of our work, which introduces the separability and disjointness tests, has two key objectives for unstructured measurements: (1) to rigorously define the notion of interaction, and (2) to design measurable metrics that quantify the intensity of such interactions. To our knowledge, no prior work has addressed these objectives (though we welcome references to relevant literature if we have overlooked any).\\n\\nWe do not claim that performing independence testing via mutual information or additivity testing via learned representations is the key novelty of our work. Instead, our primary contribution lies in rigorously characterizing the types of interactions identifiable by each method and specifying the precise modeling assumptions required for these tests. We believe the theoretical foundation provided by our work offers valuable insights for the community.\\n\\n\\n> Essentially, this paper combines many existing methods to achieve an empirical (and somewhat narrow) goal. I believe this is a valuable contribution, but it may be suited for a more applied venue.\\n\\nThank you for recognizing the value of our contribution!\\n\\nWe want to emphasize that this work establishes a general framework for efficiently detecting (potentially rare) pairwise interactions without requiring structured measurements. Given this flexible setting, we believe our methods are broadly applicable across many scientific domains (as outlined in the first paragraph of the introduction). We do not consider this to be a narrow empirical goal.\\n\\n> Can you counter my points (in weaknesses) regarding the lack of novelty in this work?\\n\\nPlease see our previous responses. \\n\\n> The right-hand side of the bottom equation on p.4 seems off; isn't $p_{Z_i}$ a distribution over a scalar random variable, and the argument $g^{-1}(x)$ is a vector?\\n \\nIf you are referring to the equation in lines 205\\u2013207, we believe it is correct.
All relevant probability density functions pertain to random variables defined on general measurable spaces $\\\\mathcal{X}$ and $\\\\mathcal{Z}$, which are not required to be scalars.\\n \\n> I think the clarity can be improved (and some space saved) if you apply a log to both sides of Equation (1) and go directly to Equation (3), without writing Equation (2).\\n\\nThank you for the suggestion. We will edit the relevant text in the camera-ready to improve clarity.\\n\\nThank you again for your comments. We are happy to answer any further questions you have during the discussion period.\"}", "{\"title\": \"Thank you for your follow-up questions! Please see our response below.\", \"comment\": \"> 1. ...\\n\\nThank you for following up; we should have been more precise. It is worth clarifying that there are two models: \\n1. [Additive] The simple additive model $\\\\hat{h}\\\\_{i,j}: R^d \\\\times R^d \\\\rightarrow R^d$ which is defined as $\\\\hat{h}\\\\_{i,j}:= \\\\vec{h}\\\\_i + \\\\vec{h}\\\\_j$ where $\\\\vec{h}\\\\_i := E[h(x)|\\\\delta_i]-E[h(x)|\\\\delta_0]$. This simple \"model\" has no trainable parameters (making it trainable is an obvious extension, but in the settings that we consider $d = 1024$ and we only see 1225 unique $i,j$ pairs, so overfitting is a real concern).\\n\\n2. [Proxy] The matrix completion model $P(R|H\\\\_t = \\\\\\\\{a^{(k)}, \\\\mathbf{R}\\\\_{i^{(k)}, j^{(k)}}\\\\\\\\}\\\\_{k=1}^{t-1})$ that aims to predict $R_{i,j} = \\\\|\\\\|h\\\\_{i,j} - \\\\hat{h}_{i,j}\\\\|\\\\|$.\\n \\n[Proxy] is the model that we referred to as being calibrated. Calibration is typically presented from a frequentist perspective as $P(Y = \\\\hat{Y}| \\\\hat{P} = p) = p\\\\quad \\\\forall p\\\\in[0,1]$ (see e.g.
Guo [2017], \"On the Calibration of Modern Neural Networks\"), where $Y$ corresponds to $R$ in our setting (to avoid needing to be careful about densities, assume for this discussion that $P(R|H_t)$ outputs a discrete distribution over a sufficiently fine-grained binning of rewards even though they are really continuous). Note that this doesn't require perfect predictions, only that the model correctly \"knows\" where it is uncertain (as mentioned in our previous response, this can also be weakened to a weaker notion of calibration, but that requires a theorem). With this in mind, a model that has a uniform distribution over all possible rewards at time 0 is still calibrated (because it \"knows\" nothing other than its priors). The calibration assumption simply rules out the possibility that the proxy model is confidently wrong about the additive model for any $i,j$ pair.\\n \\nNow, in the special case where $R_{i,j}$ is perfect for all remaining $i,j$ pairs, one can directly measure this at every round by observing zero loss from $\\\\frac{1}{|B|}\\\\sum\\\\_{i,j \\\\in B}\\\\|\\\\|\\\\vec{h}\\\\_{i,j} - (\\\\vec{h}\\\\_i +\\\\vec{h}\\\\_j)\\\\|\\\\|$ at every round of experimentation. The fact that IDS is optimizing for the worst-case examples given the proxy's predictions while taking into account (calibrated) uncertainty means that getting zero loss is a reliable upper bound on the real error $\\\\frac{1}{|\\\\Delta_t|}\\\\sum_{i,j \\\\in \\\\Delta_t}\\\\|\\\\|\\\\vec{h}_{i,j} - (\\\\vec{h}_i +\\\\vec{h}_j)\\\\|\\\\|$ (i.e. the error in all remaining pairs). \\n\\nTo be clear, this argument is from follow-up work so we do not plan to include it in this paper (it obviously needs more detail to be made formal), but we include it here to make it clear that it is possible to derive a stopping criterion if needed.\\n\\n> 3.
....\\n\\nHow accurately the double perturbation can be predicted with the additive model will vary as a function of your encoder, $h(\\\\cdot)$. Trivially, $h(\\\\cdot) = 0$ (i.e. an encoder that maps all images to the zero vector) will be perfectly accurate for all pairs $i,j$ (because $h_{i,j} = h_i + h_j = 0$ for all $i,j$ with the trivial encoder), but useless in practice. As a result, the best way to answer your question is to examine Figure 5 (left) and (middle), which test the sufficient condition for *any* encoder to be additive (i.e. consider the $h$ that makes the error of the additive model largest). In those plots, dark blue regions correspond to areas where the additive model is accurate and green / yellow regions are where it is incorrect. \\n\\nIf you would prefer to see this for a particular choice of encoder, we have now also included a plot of $\\\\|\\\\|\\\\vec{h}\\\\_{i,j} - \\\\vec{h}\\\\_{i} - \\\\vec{h}\\\\_{j}\\\\|\\\\|_2$ in the appendix (final page). You will see that the plot is qualitatively similar to the MMD-based plots in Figure 5. This is as expected for a representation that does not unnecessarily throw information away. The accuracy of the additive model in this setting will be a function of your chosen threshold below which you decide a prediction is \"accurate\". We will include ROC curves in the final version of this, but it is clear from these plots that the additive model is certainly *not* accurate everywhere. This is unsurprising from a biological perspective: it is well known that certain effects only show up in double perturbations: see for example, Reviewer 1aqQ's discussion on the extensive biological literature on 'epistasis' which studies these phenomena.
It is precisely the rarity of these interactions that underscores the importance of an efficient active acquisition process to detect them without an exhaustive search over all pairs. And, if these interactions are in fact rare, then it is plausible that with methods like those we propose, we could discover all of them using active learning (if everything interacted, this would require far more extensive experimentation).\"}", "{\"comment\": \"Thanks for your reply, and for posting an updated pdf.\\n\\n> Our framework is flexible and can accommodate various estimators, allowing users to select an appropriate KL estimation method based on their data type.\\n\\nThis is the point I was trying to make - since the framework itself is the novelty of this work, I think we need to see evidence of this statement. I.e. as a potential user of this framework, it would be helpful to know whether the results will change dramatically if you swap the SMILE estimator with $\\\\tau = 5$ with something else.\"}" ] }
Bt1vnCnAVS
Leave-One-Out Stable Conformal Prediction
[ "Kiljae Lee", "Yuan Zhang" ]
Conformal prediction (CP) is an important tool for distribution-free predictive uncertainty quantification. Yet, a major challenge is to balance computational efficiency and prediction accuracy, particularly for multiple predictions. We propose **L**eave-**O**ne-**O**ut **Stab**le **C**onformal **P**rediction (LOO-StabCP), a novel method to speed up full conformal using algorithmic stability without sample splitting. By leveraging *leave-one-out* stability, our method is much faster in handling a large number of prediction requests compared to existing method RO-StabCP based on *replace-one* stability. We derived stability bounds for several popular machine learning tools: regularized loss minimization (RLM) and stochastic gradient descent (SGD), as well as kernel method, neural networks and bagging. Our method is theoretically justified and demonstrates superior numerical performance on synthetic and real-world data. We applied our method to a screening problem, where its effective exploitation of training data led to improved test power compared to state-of-the-art method based on split conformal.
[ "Conformal Prediction", "Algorithmic Stability", "Regularized Loss Minimization", "Stochastic Gradient Descent" ]
Accept (Poster)
https://openreview.net/pdf?id=Bt1vnCnAVS
https://openreview.net/forum?id=Bt1vnCnAVS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xSliEm8NPM", "wHzB9dZG0D", "vrTLfaskka", "uZulXThkjP", "uNqjmfay9Q", "sDNEuzePml", "rG0oo2Wah8", "lHPW9DGiQC", "jiAl2EtBir", "hikNOjaCPP", "bnry7wCieo", "X7fCOtBBsq", "QN2BZ2Y87y", "P7ofZrRkET", "No5LoCRfgN", "MyswKpGN3v", "L142SWJPfv", "Gt5p4GtvFn", "AFiLPcYaAU", "ACkI1lKbj2", "A7kjF2uoGD", "3ji9ZdV8xl", "0lcQRQAZwM", "0Jkpr4AwPs", "0E5nXdEru3" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732418020515, 1732396837379, 1732683261324, 1732676405215, 1730930725368, 1730562404629, 1732401623357, 1732399783743, 1732397488826, 1732402216040, 1732659822594, 1732397345326, 1732396696198, 1729165760252, 1732656442449, 1732419885226, 1737523644657, 1730709787196, 1732398509706, 1732825744803, 1732397085298, 1732399115853, 1735010097444, 1732400744566, 1732784992102 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4509/Reviewer_VLB9" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Reviewer_Rbu1" ], [ "ICLR.cc/2025/Conference/Submission4509/Reviewer_3RnT" ], [ "ICLR.cc/2025/Conference/Submission4509/Reviewer_LXUa" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Reviewer_VLB9" ], [ "ICLR.cc/2025/Conference/Submission4509/Reviewer_LXUa" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4509/Reviewer_Rbu1" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Area_Chair_CBnE" ], [ "ICLR.cc/2025/Conference/Submission4509/Authors" ], [ "ICLR.cc/2025/Conference/Submission4509/Reviewer_3RnT" ] ], "structured_content_str": [ "{\"comment\": \"I appreciate the authors' efforts to improve the article. While I remain unconvinced by the clarification regarding the novelty of the work, I acknowledge the effort put into providing additional theoretical results. In light of this, I am willing to raise my score to 5.\"}", "{\"title\": \"Reply to Common Concern: More Applications/Examples\", \"comment\": \"We greatly appreciate the constructive comments from the reviewers!\\nWe recognize that there exists a common concern shared by all of you. We will respond to this question here to avoid repetitive individual replies on the same issue.\\n\\n---\\n\\n### The shared comment is:\\n\\n- **What other applications/examples does our method apply to?**\\n\\nSpecifically, we understand that reviewers have the following related/elaborating comments:\\n\\n1. Reviewers Rbu1, LXUa, and VLB9 all urged us to explore more applications of our results, specifically emphasizing the application to neural networks. \\n2. Reviewer VLB9 felt that RLM and SGD in our paper are \\\"very limited\\\" applications. \\n3. 
Reviewer Rbu1 mentioned that conditions such as Lipschitzness and convexity required by Theorem 2 (RLM) and Theorem 3 (SGD) may be difficult to verify, and the LOO stability bound may be difficult to calculate in practice.\\n\\n---\\n\\n### Added Applications/Examples\\nIn response to your request, we have added the following applications/examples to the revised version of this paper:\\n\\n- Kernel method (closely related to RLM) \\n- Neural networks (via our new theorem for SGD that does not require convexity) \\n- Bagging, which includes random forest as a special case \\n\\n---\\n\\n### Addressing Concerns\\n\\n**For 1.** \\nIn the rebuttal revision, we present a new theoretical result, applying our method to SGD with potentially non-convex objective functions; see Theorem 4 (highlighted in blue).\\n\\n**For 2.** \\nOur understanding is that RLM and/or SGD can serve as the basic tools in many learning problems. For example, RLM covers many likelihood-based methods in statistical learning. In the revised version, we also presented the well-known kernel method as a special case of RLM, demonstrating its generality. Also, neural networks with $\\\\ell_2$ regularization can be formulated in the RLM format. As for SGD, the application is even wider since it is an optimization tool that can serve any learning task that involves optimization. For example, it has been a popular optimization tool for fitting deep neural networks (DNN). In our new theory, we apply SGD to analyze neural networks (Theorem 4).\\n\\nIn the revised version, our analysis for the newly added application bagging is different from that for RLM and SGD; see Theorem 5.
(2021), which may require some knowledge (e.g., light-tail) about the data distribution. Consequently, these Lipschitz, smoothness and convexity constants, whichever applicable, are the properties of the deterministic objective functions.\\n\\nFor example, to obtain those constants in the case of robust linear regression that we used, we could directly compute derivatives and find the bounds. Similarly, for neural networks composed of simple functions, the bounds can still be derived using the chain rule. In these examples, all needed components of the stability bound can be directly derived or read from the problem set-up and the method's formulation.\\n\\nMeanwhile, we do agree with you that there exists no universal analytical formula for stability bounds for any prediction method $f$. This is the reality not only for our method, but for all algorithmic stability results that we know of (Soloff et al., 2024; Liang et al., 2023; Ndiaye, 2022; Wang et al., 2023). The stability bound formula for each application/example needs to be developed by researchers. We deem the development of these bounds for more prediction methods as an intriguing avenue for future work.
I will keep my score and tend to accept.\"}", "{\"summary\": \"This paper proposes Leave-One-Out Stable Conformal Prediction, a novel method to speed up full conformal using algorithmic stability without sample splitting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper is clearly written and easy to follow.\", \"weaknesses\": [\"Typos and minor issues\", \"L103: notation [1:m] is not defined\", \"L190: the objective function (is) often highly nonconvex\"], \"questions\": [\"L33: Do we require $\\\\mathcal{Y} \\\\subseteq \\\\mathbb{R}$? Do we require the marginal distribution $P_Y$ is continuous for the non-conformity scores to be uniformly distributed (L81)?\", \"L34: Is D_test drawn from the same distribution $P_{X,Y}$ as D? Is it iid drawn? If yes, then is it equivalent to consider a single test example (e.g., Equation (1) only need to hold for one data point instead of for all $j \\\\in [m]$)? If not, then why all non-conformity scores are exchangeable (L78)?\", \"L81: Does this hold for any alpha in [0,1]? How do we obtained this from the fact that the rank is uniformly distributed over $\\\\{1, \\\\dots, n+1\\\\}$?\", \"L81:` $\\\\mathcal{Q}_{1-\\\\alpha}$` is not defined. Is it lower quantile function `$\\\\mathcal{Q}_{p}:= \\\\inf \\\\{x: F(x) \\\\geq p\\\\}$`?\", \"L86 (Equation 2)\\\" Shouldn't $1-\\\\alpha$ be slightly increased to $\\\\frac{\\\\lceil (1-\\\\alpha)(n+1) \\\\rceil}{n}$ for this to hold in finite sample? See Equation (19) of https://www.stat.berkeley.edu/~ryantibs/statlearn-s23/lectures/conformal.pdf.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Leave-One-Out Stable Conformal Prediction (LOO-StabCP), a novel conformal prediction approach that enhances computational efficiency for multiple predictions while ensuring coverage guarantees. 
LOO-StabCP builds on prior work by Ndiaye (2022), which utilized replace-one stability, by leveraging leave-one-out stability to require only a single model fit on the training data, regardless of the number of test points. This eliminates the need for computationally intensive refits for each individual test point. The authors validate LOO-StabCP with theoretical stability bounds for regularized loss minimization (RLM) and stochastic gradient descent (SGD)--methods widely used in modern machine learning. Extensive experiments on synthetic and real-world datasets show that LOO-StabCP matches or exceeds existing methods in predictive accuracy, efficiency, and computational speed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper makes a contribution to conformal prediction by addressing a critical need for efficient uncertainty quantification in large-scale applications, especially when predictions are needed across multiple test points. Building on prior work that used algorithmic stability for conformal prediction, LOO-StabCP introduces a novel approach to leave-one-out stability by treating the model trained on the dataset (=training dataset $\\\\mathcal{D}$) as one generated by \\u201cleaving out\\u201d each test point from an augmented set (as noted in Section 3.1 following Algorithm 1). This refinement results in a significant speedup, especially valuable in scenarios with numerous predictions.\\n\\nThe authors provide a solid theoretical foundation to support their proposed LOO-StabCP by deriving stability bounds for Regularized Loss Minimization (RLM) and Stochastic Gradient Descent (SGD), strengthening the method\\u2019s rigor. Additionally, empirical results on both synthetic and real-world datasets effectively demonstrate LOO-StabCP\\u2019s advantages in computational speed, prediction interval size and statistical power. 
The comparison of computational complexity across methods (e.g., FullCP, SplitCP, RO-StabCP), summarized in Table 1, clearly showcases the efficiency gains, and the illustration of practical applications in screening tasks, described in Section 6, highlights the method\\u2019s utility.\", \"weaknesses\": \"Overall, I believe this is a well-written paper. However, some refinements in presentation could potentially enhance clarity and completeness, even though it is already quite solid. Here are some specific suggestions.\\n\\n**1. Adding Remarks & Prospects for Broader Examples/Applications:**\\nAdding comments on the tightness of stability bounds for the RLM and SGD examples in Section 3.2 would improve the interpretability of results, and clarify the sharpness or looseness of the current results. \\nAdditionally, while RLM and SGD are valuable examples, discussing potential extensions to other models or methods, along with conjectures or insights about the scope of the applications, would help readers better understand the broader applicability of the proposed approach. Furthermore, this is a minor point, but for improved parallelism between Sections 3.2.1 and 3.2.2, it may be worth noting that RLM and SGD are not strictly parallel choices: RLM represents a problem formulation, whereas SGD is an optimization algorithm used to solve such problems.\\n\\n**2. Augmenting Experiments:**\\nAs one way to assess the tightness of the proposed LOO-StabCP prediction intervals, the authors could compare the intervals obtained by LOO-StabCP against the tightest possible prediction set with coverage $1-\\\\alpha$ as a benchmark, which can be calculated in numerical experiments, for instance, using the $\\\\alpha/2$- and $(1-\\\\alpha/2)$-quantiles of instantiated predictions from multiple runs.
This comparison would provide further insights into the tightness of the proposed method (as well as other CP methods being considered in the study).\\nAlso, broadening the simulation studies to include a wider range of model types would offer greater assurance that LOO-StabCP's performance gains are not specific to the tested settings. Expanding these results, perhaps in an Appendix, would help readers evaluate the method\\u2019s effectiveness across diverse applications.\\n\\n**3. Ensuring Notation Consistency and Simplification:** \\nEnsuring that all notation is defined before use (e.g., $[1:n]$) would improve accessibility. Simplifying notation where possible--for instance, by fixing $j=1$ in Section 2 to reduce complexity without sacrificing rigor--could enhance readability. While the authors may have intended to highlight the dependence on the test point index $j$, simplifying this notation in Section 2 and then generalizing back in Section 3 could make the initial section more approachable.\", \"questions\": [\"Here is a list of questions and minor suggestions.\", \"*Line 34:* I think $Y_{n+j}$ should not be included in $\\\\mathcal{D}_{\\\\textrm{test}}$.\", \"*Line 58:* Consider using \\u201cRO-StabCP\\u201d for clarity instead of \\u201cin Ndiaye (2022),\\u201d which is already referenced in the preceding paragraph.\", \"*Line 70:* The phrase \\u201cguess $Y_{n+j}$ with $y$\\u201d may not read clearly.\", \"*Line 75:* Specifying the range, e.g., by \\u201c\\u2026swapped for $i, i' \\\\in [n] \\\\cup \\\\{n+j\\\\}$\\u201d would be clearer.\", \"*Line 90:* Typo: $y$ should be replaced by $i$.\", \"*Line 103 (Definition 1):* (1) Define $[1:m]$ notation; (2) clarify the quantifier regarding $\\\\mathcal{D}$.\", \"*Line 108:* \\u201cRecall\\u201d may be clearer than \\u201cLet.\\u201d\", \"*Line 140:* Consider adding remarks right after Definition 2 to discuss (1) the pursuit of adaptive parameters rather than uniform bounds to obtain sharper stability 
estimates, (2) the practicality of assuming known parameters, and (3) how these parameters impact the accuracy and robustness of prediction intervals. Although these points are addressed in later sections, readers would benefit from a brief mention here.\", \"*Line 154:* Using varied parenthesis sizes or brackets might improve readability.\", \"*Line 202:* Recall the meaning of the augmented data $\\\\mathcal{D}^y_j$ from Line 70 for context.\", \"*Line 367:* The statement \\u201cThis leads to wider prediction intervals for all methods, and particularly for SplitCP, more variability in prediction interval length\\u201d is not clear to me. It suggests the prediction interval of SplitCP becomes particularly wider due to the increase $m=1 \\\\to m=100$. However, SplitCP already appears to vary similarly at both $m=1$ and $m=100$, while the other methods vary more at $m=100$.\", \"*Line 369:* While the authors note that derandomization (Gasparin \\\\& Ramdas, 2024) would incur extra computational costs, how does LOO-StabCP compare with derandomization in other aspects such as prediction accuracy, coverage, and stability?\", \"*Line 465:* The comment \\u201cCompared to cfBH, our method is more powerful\\u201d could benefit from further clarification. How is this conclusion drawn from Figure 3 and Table 3?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"References\", \"comment\": \"[1] Rina Foygel Barber, Emmanuel J. Cand\\u00e8s, Aaditya Ramdas, and Ryan J. Tibshirani. Predictive inference with the jackknife+. *The Annals of Statistics*, 49(1):486\\u2013507, 2021. doi: 10.1214/20-AOS1965. https://doi.org/10.1214/20-AOS1965.\\n\\n[2] Eugene Ndiaye. Stable conformal prediction sets. In *International Conference on Machine Learning*, pp. 16462\\u201316479. PMLR, 2022.\\n\\n[3] Jake A Soloff, Rina Foygel Barber, and Rebecca Willett. 
Bagging provides assumption-free stability. *Journal of Machine Learning Research*, 25(131):1\\u201335, 2024.\\n\\n[4] Ruiting Liang and Rina Foygel Barber. Algorithmic stability implies training-conditional coverage for distribution-free prediction methods. *arXiv preprint arXiv:2311.04295*, 2023.\\n\\n[5] Yan Wang, Huaiqing Wu, and Dan Nettleton. Stability of random forests and coverage of random-forest prediction intervals. *Advances in Neural Information Processing Systems*, 36:31558\\u201331569, 2023.\"}", "{\"comment\": \"Thank you for your comments.\\nWe understand that your main concern regards novelty, as **you commented**: \\n\\n*\\u201cThe method proposed by the authors is quite similar to the baseline established by Ndiaye (2022). Given this foundation, the extension presented by the authors appears somewhat straightforward, leading to a limited contribution.\\u201d*\\n\\n**Response:**\\n\\nWe would like to clarify several important points, as follows:\\n\\n**First**, while our work builds on the series of work by Lei and Wasserman (2014), Barber et al. (2021), and Ndiaye (2022), our LOO-StabCP method presents a novel stability mechanism that parallels Ndiaye's RO-StabCP. The field of stable conformal prediction naturally follows a common narration flow: first propose a stability type, then establish theoretical guarantees, and finally analyze a few examples/applications, accompanied by numerical studies. However, the key innovations lie in the specific stability mechanisms and their resulting theoretical and computational characteristics. Our theoretical results (e.g., compare the bounds in Theorem 2) and numerical studies demonstrate that our proposed mechanism has significant advantages.\\n\\nWe intentionally maintained a parallel structure to Ndiaye (2022) for an easier comparison of results, especially for readers familiar with that work. 
This similarity in the order of presenting scientific contents is non-essential and should not diminish the novelty of our work because the scientific contents themselves are original. Our method achieves significant improvements in both computational efficiency and accuracy for multiple prediction problems and therefore should not be considered a very minor variation of Ndiaye (2022).\\n\\n**Second**, we emphasize that the leave-one-out stability, compared to replace-one, represents a non-obvious, even a bit counter-intuitive, methodological innovation. Recall that both Ndiaye (2022) and our work are variations of full conformal. Our work challenges the doctrine in full conformal prediction that including the $(n+1)$th data point in model fitting is required. Even Ndiaye (2022) still follows this rule. Beginners to conformal prediction are usually warned by their teachers that training only on data points $1,\\\\ldots,n$ would be a mistake for full conformal. In this regard, our leave-one-out stability hits a conceptual blind spot. Like many methodological innovations, the simplicity of our approach becomes more apparent in hindsight (than in foresight).\\n\\n**Third**, our proposal is driven by strong practical need and effectively addresses a key issue in Ndiaye's RO-StabCP: computational inefficiency in handling many prediction requests. With the recent development in combining FDR control with conformal prediction, computational efficiency of the conformal prediction method has become a spotlight issue for distribution-free statistical inference (Jin \\\\& Candes, 2022). \\n\\nIn the revised paper, motivated by the comments of all reviewers, we also expanded the applications/examples much beyond Ndiaye (2022). This also represents significantly nontrivial advancement. See the color-highlighted contents in the revised paper, including the appendices.\\n\\n---\\n\\n**You also commented**: \\n\\n*\\u201c... 
Since conformal prediction is elegant for its wide applicability as a wrapper around any machine learning model, the examples RLM and SGD discussed in the paper are very limited. This limitation is particularly notable in the current landscape, where sophisticated deep learning models and LLMs are prevalent.\\u201d* \\n\\nand \\n\\n*\\u201cThe only issue I care about is whether commonly used lightweight machine learning algorithms like the random forest or simple neural networks can satisfy the LOO stability proposed in the paper. It will be a major issue and I will be willing to raise my score with extra theoretical results.\\u201d* \\n\\n**Response:**\\n\\nThank you for spurring us to explore more applications of our method. \\nWe understand that these points are closely related to the common comment made by most reviewers on whether our method is applicable to more examples, including (deep) neural networks. Therefore, we address these two comments in our general reply to all reviewers. Please refer to bullet point 2 for our reply to your comment questioning the apparent limitation of our applications, and other bullet points there for our new results applying our method to neural networks via SGD (Theorem 4 in the revised manuscript).\"}", "{\"comment\": \"## Your Questions:\\n\\n---\\n\\n1. **Question:** \\n \\\"*I think it is necessary to discuss more and carefully about applying LOO-StabCP to deep learning algorithms for reasons outlined in the weaknesses box.*\\\"\\n\\n**Response:** \\nWe have addressed these points in detail in our earlier responses. Kindly refer to those responses, as we believe they address your questions. Thank you again for your thoughtful comments about this issue.\\n\\n---\\n\\n2. **Question:** \\n *As introduced in section 1, derandomization methods can address the issue of decreasing accuracy caused by randomly split but increase computational cost. 
I think it is better to demonstrate the performance of some derandomization method and compare it with LOO-StabCP to illustrate the differences between the two in all aspects.*\\n\\n**Response:** \\nThank you for pointing out this important aspect, which we had previously overlooked. In the revised manuscript, we explicitly addressed this in the very last part of Section 5 and provided a detailed discussion in Appendix B. In particular, we included a detailed comparison with two derandomization methods: which we named MM-SplitCP (Solari and Djordjilovi\\u0107, 2022) and EM-SplitCP (Gasparin and Ramdas, 2024).\\n\\nOur numerical experiments, presented in Figures 7 and 8, show that while derandomization methods effectively stabilize prediction intervals, they often produce conservative results with wider intervals. In contrast, LOO-StabCP achieves comparable stability without relying on random splits, avoiding additional randomness altogether. Moreover, LOO-StabCP produces tighter prediction intervals with valid coverage, maintaining competitive accuracy. Importantly, LOO-StabCP is computationally far more efficient, as it requires only a single model fit, whereas derandomization methods involve multiple fits across different splits, significantly increasing their computational cost.\\n\\nThis comparison highlights the practical strengths of LOO-StabCP, particularly in scenarios where computational efficiency is critical. Your suggestion allowed us to address this key point, and we are grateful for your valuable feedback.\\n\\n---\\n\\n3. **Question:** \\n \\\"*The authors compute stability-adjusted p-values for multiple selection in section 6. I think it is better to verify the validity of p-values in (7) and state it as a proposition for completeness.*\\\"\\n\\n**Response:** \\nThis is a good question!\\n\\nWe provide a short proof that the $p_j^{\\\\rm LOO}$ defined in Equation (7) is indeed a valid p-value. 
Let $\\\\tilde{f}$ denote an oracle $f$ trained on the data $X_1, \\\\ldots, X_n$ and $X_{n+j}$. Then we know that the oracle p-value is defined as:\\n\\n$$\\np_j^{\\\\rm oracle} = \\\\frac{\\\\sum_{i=1}^n \\\\mathbf{1}\\\\\\\\{S(Y_i, \\\\tilde{f}(X_i)) < S(Y_{n+j}, \\\\tilde{f}(X_{n+j}))\\\\\\\\} + 1}{n+1},\\n$$\\n\\nand is a valid p-value because the rank of $S(Y_{n+j}, \\\\tilde{f}(X_{n+j}))$ among all oracle non-conformity scores is discrete uniform. To compare $p_j^{\\\\rm oracle}$ with $p_j^{\\\\rm LOO}$, we rely on the definition of LOO stability. Specifically, for all $i = 1, \\\\ldots, n$ and $i = n+j$, we have:\\n$$\\n|S(Y_i, \\\\tilde{f}(X_i)) - S(Y_i, \\\\hat{f}(X_i))| \\\\leq \\\\tau_{i,j}^{\\\\rm LOO},\\n$$\\nwhere $\\\\hat{f}$ is the model trained without the $j$th data point. Under the null hypothesis, where $Y_{n+j} \\\\leq c_j$, we also have:\\n$$\\nS(Y_{n+j}, \\\\hat{f}(X_{n+j})) \\\\leq S(c_j, \\\\hat{f}(X_{n+j})).\\n$$\\nCombining these two inequalities, we get:\\n$$\\nS(Y_i, \\\\tilde{f}(X_i)) < S(Y_{n+j}, \\\\tilde{f}(X_{n+j}))\\n\\\\Rightarrow S(Y_i, \\\\hat{f}(X_i)) - \\\\tau_{i,j}^{\\\\rm LOO} < S(c_j, \\\\hat{f}(X_{n+j})) + \\\\tau_{n+j,j}^{\\\\rm LOO}.\\n$$\\nThus, the indicator function satisfies:\\n$$\\n\\\\mathbf{1}\\\\\\\\{S(Y_i, \\\\tilde{f}(X_i)) < S(Y_{n+j}, \\\\tilde{f}(X_{n+j}))\\\\\\\\} \\\\leq \\\\mathbf{1}\\\\\\\\{S(Y_i, \\\\hat{f}(X_i)) - \\\\tau_{i,j}^{\\\\rm LOO} < S(c_j, \\\\hat{f}(X_{n+j})) + \\\\tau_{n+j,j}^{\\\\rm LOO}\\\\\\\\}.\\n$$\\nSumming these indicator values, it follows that:\\n$$\\np_j^{\\\\rm oracle} \\\\leq p_j^{\\\\rm LOO}.\\n$$\\nBecause $p_j^{\\\\rm oracle}$ is a valid p-value under the null hypothesis, $p_j^{\\\\rm LOO}$ also satisfies the validity property.\\n\\nAs a side note, while addressing this question, we identified typos in Equations (6) and (7), which we have now corrected. 
Although these were typographical errors in the equations, they arose from shifting perspectives in representing the methodology when editing the draft for the first submission. Importantly, this did not affect our numerical results, as the implementation was consistent with the corrected representation. We sincerely thank you for your comment, which gave us the opportunity to identify and correct these typos.\\n\\n---\\n\\n## References\\n\\n[1] Aldo Solari and Vera Djordjilovi\\u0107. Multi split conformal prediction. *Statistics & Probability Letters*, 184:109395, 2022.\\n\\n[2] Matteo Gasparin and Aaditya Ramdas. Merging uncertainty sets via majority vote. *arXiv preprint arXiv:2401.09379*, 2024.\"}", "{\"title\": \"References:\", \"comment\": \"[1] Jing Lei and Larry Wasserman. Distribution-free prediction bands for non-parametric regression. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 76(1):71\\u201396, 2014.\\n\\n[2] Rina Foygel Barber, Emmanuel J. Cand\\u00e8s, Aaditya Ramdas, and Ryan J. Tibshirani. Predictive inference with the jackknife+. *The Annals of Statistics*, 49(1):486\\u2013507, 2021. doi: 10.1214/20-AOS1965. https://doi.org/10.1214/20-AOS1965.\\n\\n[3] Eugene Ndiaye. Stable conformal prediction sets. In *International Conference on Machine Learning*, pp. 16462\\u201316479. PMLR, 2022.\\n\\n[4] Ying Jin and Emmanuel J Cand\\u00e8s. Selection by prediction with conformal p-values. *Journal of Machine Learning Research*, 24(244):1\\u201341, 2023.\"}", "{\"comment\": \"We sincerely thank you for maintaining your positive opinion of our paper! Addressing the various thoughtful and insightful concerns you raised allowed us to delve deeper into our work, and we are confident that it has significantly helped us develop our discussion to a more profound level. Once again, we truly appreciate your invaluable support.\"}", "{\"comment\": \"Thank you very much for your valuable suggestions. 
Here is a point-by-point response to your concerns.\\n\\n\\n## Your comments:\\n\\n---\\n\\n1. **Comment:**\\n \\\"*The authors introduce leave-one-out algorithmic stability for stability correction. However, it is difficult to calculate the stability bounds $\\\\tau_{i,j}^\\\\mathrm{LOO}$ when a complex deep learning algorithm is chosen to fit the model.*\\\"\\n\\n2. **Comment:**\\n \\\"*When trying to derive the LOO stability bound of RLM and SGD described in section 3.2, similar problems are encountered as mentioned above: the conditions in Theorems 2 \\\\& 3 are difficult to verify.*\\\"\\n\\n**Response:** \\nThank you for your insightful comments 1 \\\\& 2. We address them together.\\n\\nWhile neural networks are structurally complex, they are built of linear functions and activation functions, combined through composition. Using the chain rule, we can compute stability bounds by calculating derivatives layer by layer, as long as we choose an activation function with Lipschitz derivatives\\u2014common nonlinear activation functions, such as the sigmoid function or its scaled variant, the hyperbolic tangent, are Lipschitz. This process enables us to determine the Lipschitz constants for each layer and combine them to obtain the overall Lipschitz constant for the network.\\n\\nAlternatively, we can also use practical approximations to these stability bounds. For instance, rough estimates of the stability terms can be calculated by analyzing interactions between training and test data points in the feature space (see Appendix A.2 for more details). These approximations provide a feasible way to assess stability without requiring precise constants. As demonstrated in our numerical experiments (Figure 5), even with these approximations, LOO-StabCP maintained valid coverage and achieved competitive prediction interval tightness. This highlights the practical feasibility of applying LOO-StabCP to neural networks.\\n\\n---\\n\\n3. 
**Comment:** \\n \\\"*Numerical experiments are insufficient: the covariates of synthetic data are set to be independent; the prediction algorithms used are all robust linear regression. More complex settings and algorithms should be considered and compared.*\\\"\\n\\n**Response:** \\nWe agree that the original experiments relied on simplified settings, such as independent covariates and the use of robust linear regression, which might not fully demonstrate the generality of our method. To address this, we have significantly expanded the scope of our numerical experiments in the revised manuscript to include more complex settings and algorithms.\\n\\nFirst, we introduced dependencies in the synthetic data by incorporating an AR(1) covariance structure for the covariates, with a correlation parameter $\\\\rho = 0.5$. This change ensures that the synthetic data better reflects real-world scenarios where covariates are often correlated.\\n\\nSecond, we extended the experiments to include more sophisticated models, such as kernelized robust regression (Section 3.2.3 along with Appendix A.1) and neural networks (Section 3.2.3 \\\\& 5 along with Appendix A.2). Specifically, we used radial basis function (RBF) and polynomial kernels to capture nonlinear patterns in the data, as well as neural networks to evaluate our method's applicability to deep learning models.\\n\\nThe results, presented in Figures 3, 5, and 6, demonstrate that LOO-StabCP performs robustly across all these settings. It maintained valid coverage while producing tight prediction intervals, even under complex data structures and model configurations. These findings highlight the versatility and practicality of our method beyond the simpler cases considered in the original submission. 
We appreciate your suggestion, as it has allowed us to showcase the broader applicability of LOO-StabCP and enhance the rigor of our empirical evaluations.\"}", "{\"title\": \"Revised Manuscript and Responses\", \"comment\": \"Dear AC and reviewers,\\n\\nThank you so much for your invaluable comments on our work!\\nWe have updated the paper with new theoretical and numerical results, along with point-by-point responses to your concerns.\\nWe look forward to the discussion.\\n\\nThank you! \\\\\\nAnonymous Authors\"}", "{\"summary\": \"This paper proposes a leave-one-out algorithm stability definition, which the authors utilize to reduce the computational burden of the full conformal prediction method. The finite-sample validity of the prediction interval is proved. The authors have provided some experimental results to show the superiority of the proposed method in computation speed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed method does reduce the computational burden by circumventing model refitting when computing prediction intervals for distinct test points. The computational complexity is then reduced by $m$ times. This is also validated by the numerical experiments.\", \"weaknesses\": \"The method proposed by the authors is quite similar to the baseline established by Ndiaye (2022). Given this foundation, the extension presented by the authors appears somewhat straightforward, leading to a limited contribution. While the proposed method significantly reduces computational burden, it relies on the assumption that the learning algorithm adheres to the LOO stability assumption. Since conformal prediction is elegant for its wide applicability as a wrapper around any machine learning model, the examples RLM and SGD discussed in the paper are very limited. 
This limitation is particularly notable in the current landscape, where sophisticated deep learning models and LLMs are prevalent.\", \"questions\": \"1. The only issue I care about is whether commonly used lightweight machine learning algorithms like the random forest or simple neural networks can satisfy the LOO stability proposed in the paper. It will be a major issue and I will be willing to raise my score with extra theoretical results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' efforts to address the questions and comments raised during the review process. The revisions and responses have satisfactorily addressed my concerns and questions, and hence, I maintain my positive rating for the paper.\"}", "{\"comment\": \"Thank you for raising the score. We completely understand that different readers may have different opinions about a given paper, as other reviewers find our paper's contributions novel. We appreciate your comments which encouraged us to explore further applications of our method. Time is limited (since the reviews posted), but we will further work on directions pointed out by you and other reviewers in the future. Thank you again.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper proposes Leave-One-Out Stable Conformal Prediction (LOO-StabCP) to speed up full conformal using algorithmic stability without sample splitting for better balance of computational efficiency and prediction accuracy. This method is much faster in handling a large number of prediction requests compared to existing method RO-StabCP based on replace-one stability. 
The authors show that their method is theoretically justified and demonstrates superior numerical performance on synthetic and real-world data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper proposes a novel method to address the problem in conformal prediction of balancing computation cost with prediction accuracy.\", \"Numerical results in this paper show that the proposed method achieves a competitive average coverage and a higher power compared to existing methods.\"], \"weaknesses\": [\"The authors introduce leave-one-out algorithmic stability for stability correction. However, it is difficult to calculate the stability bounds $\\\\tau_{i,j}^{\\\\mathrm{LOO}}$ when a complex deep learning algorithm is chosen to fit the model.\", \"When trying to derive the LOO stability bound of RLM and SGD described in section 3.2, similar problems are encountered as mentioned above: the conditions in Theorems 2 \\\\& 3 are difficult to verify.\", \"Numerical experiments are insufficient: the covariates of synthetic data are set to be independent; the prediction algorithms used are all robust linear regression. More complex settings and algorithms should be considered and compared.\"], \"questions\": [\"I think it is necessary to discuss more and carefully about applying LOO-StabCP to deep learning algorithms for reasons outlined in the weaknesses box.\", \"As introduced in section 1, derandomization methods can address the issue of decreasing accuracy caused by random splitting but increase computational cost. I think it is better to demonstrate the performance of some derandomization method and compare it with LOO-StabCP to illustrate the differences between the two in all aspects.\", \"The authors compute stability-adjusted p-values for multiple selection in section 6. 
I think it is better to verify the validity of p-values in (7) and state it as a proposition for completeness.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you so much for reading our paper very carefully and for the numerous insightful comments. Here is our point-by-point response to your comments.\\n\\n## Main comments\\n\\n---\\n\\n1. **Adding Remarks & Prospects for Broader Examples/Applications**\\n\\nThank you for this comment. We understand that this is a common concern shared by other reviewers. Therefore, we address it in the general reply to all reviewers.\\n\\n---\\n\\n2. **Augmenting Experiments**\\n\\nWe sincerely thank you for your insightful suggestion to assess the tightness of the proposed LOO-StabCP prediction intervals by comparing them to the theoretically tightest possible prediction intervals under the true data distribution. This is indeed an excellent point that has enhanced the interpretability of our results. In particular, in our simulation study, where the data distribution is known, the tightest possible interval for all predictions corresponds to twice the $(1-\\\\frac{\\\\alpha}{2})$-quantile of the standard normal distribution.\\n\\nTo address your suggestion, we incorporated this information into our results by adding horizontal dashed lines to the plots for interval length, similar to how desired coverage is presented in coverage plots. See Figures 1, 5, and 7. This inclusion provided valuable insights. For instance, in Figure 1, which uses a linear model, the results closely approach the tightest interval under the linear data-generating process (DGP). However, in nonlinear scenarios, model misspecification leads to wider intervals for all CP methods, as expected. 
Furthermore, in Figure 5, we observe that the addition of kernel methods under the nonlinear setting brings the intervals closer to the conceptual tightest width, demonstrating their effectiveness in mitigating model misspecification. We are grateful for your suggestion, which not only improved our analysis but also provided a deeper understanding of the performance of LOO-StabCP and other CP methods in various scenarios.\\n\\nYou also asked us to test our method's performance in more settings. In the revision, we added numerical studies for the following application scenarios:\\n - Kernel method (see Figure 5)\\n - One- and two-layer neural networks (see Figures 3 and 6)\\n - Comparison with derandomized SplitCP, in response to the comments by you and Reviewer Rbu1 (Appendix Section B)\\n\\n---\\n\\n3. **Ensuring Notation Consistency and Simplification**\\n\\nThank you for this comment.\\n\\nIn fact, this was exactly a point that we were torn about and spent a lot of time discussing when writing up this paper. Exactly as you mentioned, \\\"what (the non-conformity score) $S$ depends on\\\" is important. The reason our method can speed up over RO-StabCP is that it depends on fewer things. Pressure also came from the strict page limit, compared to the amount of content we wish to present.\\n\\nWe completely agree with you that it is desirable to reduce notation and make symbols as light as possible, especially for the introduction part. Following your advice, we have abbreviated $S_{i,j}^y$ as $S_i^y$ for the full and split conformal, as well as suppressed the dependency of the prediction set on $j$ there.\\nMeanwhile, please allow us to explain the several difficulties and considerations we faced.\\n\\nOn the dependency of $S$ on $i$: as you know, we want to show the source of the heavier computational burden of RO-StabCP, therefore, we explicitly wrote down the dependency of its non-conformity scores on $j$. 
But RO-StabCP builds upon full conformal, in the review of full conformal, we might want to avoid disconnection of notation (full conformal's notation is simplified, while RO-StabCP is not). Also, in SplitCP, most non-conformity scores do not depend on $y$ and $j$. Abbreviating the notation throughout Section 2 might make this difference less obvious. On the dependency of $S$ on $y$: we explicitly wrote the dependency of $S$ on $y$ to simplify the justification of full conformal, RO-StabCP and LOO-StabCP.\\n\\nIf we write Section 2 without the symbol $j$, then at the beginning of Section 3, we worry about the space it would cost to properly restate the previous results, especially for RO-StabCP, adding index $j$ back. Our manuscript, which is currently not completely doing this, is already at the 10 page limit. Finally, our understanding is that our readers might tend to already have some exposure to conformal prediction and won't use our paper as the entry point to this area.\\n\\nWith these considerations, we simplified the notation for full and split conformal reviews, while still keeping the full notation for RO-StabCP. We added a heads-up for readers at the beginning of RO-StabCP: *\\\"From now on, we will switch back to the full notation for $S$ and no longer abbreviate $S_{i,j}^y$ as $S_i^y$.\\\"*\\n\\nWe would appreciate your further feedback on the clarity of the new version's presentation of Section 2, as well as any advice from you on how to reduce notation without causing the aforementioned potential issues. Thank you so much!\"}", "{\"comment\": \"Thank you for keeping your positive view of our work. We look forward to building on this framework through further studies, and your input will be invaluable as we move forward.\\n\\nWe truly appreciate your support!\"}", "{\"comment\": \"We greatly appreciate your positive comments. 
Here is a point-by-point response to your concerns.\\n\\n---\\n### Typos and minor issues:\\n\\n- **Comment:** \\n *L103: notation $[1:m]$ is not defined*\\n\\n **Response:** \\n Thank you for this comment. We have changed the notation to the more common $[m]$ and defined it at the beginning of our recap of full conformal.\\n\\n- **Comment:** \\n *L190: the objective function (is) often highly nonconvex*\\n\\n **Response:** \\n Thank you, done.\\n\\n---\\n\\n### Questions:\\n\\n- **Comment:** \\n *Do we require $\\\\mathcal{Y} \\\\subseteq \\\\mathbb{R}$? Do we require the marginal distribution $P_Y$ is continuous for the non-conformity scores to be uniformly distributed (L81)?*\\n\\n **Response:** \\n Thank you for this comment. Yes, we do require $\\\\mathcal{Y} \\\\subseteq \\\\mathbb{R}$ as we do not consider complex-valued responses. We omitted this condition because it is a common assumption in the conformal prediction literature.\\n\\n Your question regarding the continuity of $P_Y$ is insightful and interesting. Inspecting our approach, we did not explicitly use the continuity assumption (in the development of our method, or in theoretical analysis). We noticed that we did state that we assume $P_{X,Y}$ is continuous. This is unnecessary and we have removed it from the revised version. That being said, it is true that some $P_{X,Y}$ and $f$ configurations may lead to large stability bounds, thus conservative prediction sets. Also, for simplicity, throughout the paper, we have been focusing on one specific choice of the non-conformity score, written in over-simplified notation, that $S = |y-f(x)|$. This might not be the choice of non-conformity scores for classification. 
But we feel that may go beyond the scope of this paper and did not discuss.\\n\\n In the conformal prediction literature, there is a classical technique for breaking ties by adding a small amount of artificial random noise (independent of anything else) to each non-conformity score, see Romano et al. (2020) \\\"Classification with Valid and Adaptive Coverage.\\\" Therefore, the (discrete) uniform distribution of the rank is not so much of a concern as it might seem. But it's a good question!\\n\\n\\n- **Comment:** \\n *Is $\\\\mathcal{D}\\\\_{\\\\text{test}}$ drawn from the same distribution $P_{X,Y}$ as $\\\\mathcal{D}$? Is it iid drawn? If yes, then is it equivalent to consider a single test example (e.g., Equation (1) only need to hold for one data point instead of for all $j \\\\in [m]$)? If not, then why all non-conformity scores are exchangeable?*\\n\\n **Response:** \\n Thank you for pointing this out. We have added a clarification that the test data (including the unobserved responses) are also i.i.d. from the same distribution $P_{X,Y}$.\\n\\n- **Comment:** \\n *Does this hold for any $\\\\alpha$ in $[0,1]$? How do we obtain this from the fact that the rank is uniformly distributed over $\\\\\\\\{1,\\\\dots,n+1\\\\\\\\}$?*\\n\\n **Response:** \\n Yes, the result holds for any $\\\\alpha \\\\in [0,1]$. To elaborate, the key is that the rank of the test score among the $n+1$ non-conformity scores is uniformly distributed over the discrete set $\\\\\\\\{1, \\\\dots, n+1\\\\\\\\}$. This means that the rank has an equal probability of occupying any of these $n+1$ positions, with probability $\\\\frac{1}{n+1}$ for each. Now, for the test score to fall within the $(1-\\\\alpha)$-quantile, its rank must be less than or equal to $(1-\\\\alpha)(n+1)$. Since the rank is discrete, this corresponds to the smallest integer greater than or equal to $(1-\\\\alpha)(n+1)$, which is $\\\\lceil(1-\\\\alpha)(n+1)\\\\rceil$. 
Summing the probabilities up to this rank gives the coverage probability, $\\\\frac{\\\\lceil(1-\\\\alpha)(n+1)\\\\rceil}{n+1} \\\\geq 1-\\\\alpha$.\\n\\n\\n- **Comment:** \\n *$\\\\mathcal{Q}\\\\_{1-\\\\alpha}$ is not defined. Is it lower quantile function $\\\\mathcal{Q}_{p}:= \\\\mathrm{inf} \\\\\\\\{x: F(x) \\\\geq p\\\\\\\\}$?*\\n\\n **Response:** \\n Thank you for pointing this out. We have added the definition of $\\\\mathcal{Q}$ immediately following its first appearance.\\n\\n\\n- **Comment:** \\n *In (2), shouldn't $1-\\\\alpha$ be slightly increased to $(1-\\\\alpha)$-quantile is $\\\\frac{\\\\lceil(1-\\\\alpha)(n+1)\\\\rceil}{n}$ for this to hold in finite sample?*\\n\\n **Response:** \\n Thank you for the insightful comment. This is a bit subtle\\u2014let us explain. There is a difference between Equation (19) in the material you shared and Equation (2) in our manuscript. Equation (19) computes the quantile based only on $n$ data points, excluding the test example, while our Equation (2) includes $\\\\infty$. In the former, a correction is needed because one of the original $n+1$ data points is excluded, while in our formulation, the inclusion of $\\\\infty$ allows us to compute the quantile directly over $n+1$ points, eliminating the need for such a correction.\\n\\n---\\n\\n### References:\\n\\n[1] Yaniv Romano, Evan Patterson, and Emmanuel J. Cand\\u00e8s. Classification with valid and adaptive coverage. *Advances in Neural Information Processing Systems*, 33:3581\\u20133591, 2020.\"}", "{\"comment\": \"## Other questions and minor suggestions\\n\\n- **Line 34:** *I think $Y_{n+j}$ should not be included in $\\\\mathcal{D}_{\\\\text{test}}$.* \\n Thank you. 
In the revision, we have emphasized that $Y_{n+j}$ is unobserved and now write \\\"$Y_{n+j}=?$\\\" instead of just $Y_{n+j}$ in $\\\\mathcal{D}_{\\\\text{test}}$.\\n\\n- **Line 58:** *Consider using \\\"RO-StabCP\\\" for clarity instead of \\\"in Ndiaye (2022).\\\"* \\n Thank you, done.\\n\\n- **Line 70:** *The phrase \\\"guess $Y_{n+j}$ with '$y$'\\\" may not read clearly.* \\n Sure, we have replaced it with \\\"*let $y$ denote a guessed value of the unobserved $Y\\\\_{n+j}$.*\\\"\\n\\n- **Line 75:** *Specifying the range, e.g., by \\\"...swapped for $i,i'\\\\in[n]\\\\cup \\\\{n+j\\\\}$\\\" would be clearer.* \\n Thank you, done.\\n\\n- **Line 90:** *Typo: $y$ should be replaced by $i$.* \\n Thank you, done.\\n\\n- **Line 103 (Definition 1):** *(1) Define $[1:m]$ notation; (2) clarify the quantifier regarding $\\\\mathcal{D}$.* \\n Thank you for the suggestion. \\n (1) We have switched to the notation \\\"$[m]$\\\" as you suggested (globally replaced all). \\n (2) In the revised Definition 1, we have clarified that \\\"*$\\\\hat{f}\\\\_j^{\\\\mathfrak{y}}$ is trained on $\\\\mathcal{D}\\\\cup \\\\\\\\{(X_{n+j}, \\\\mathfrak{y})\\\\\\\\}$, for $\\\\mathfrak{y}=y$ or $\\\\tilde{y}$.*\\\"\\n\\n- **Line 108:** *\\\"Recall\\\" may be clearer than \\\"Let.\\\"*\\n\\n Thank you, done.\\n\\n- **Line 140:** *Consider adding remarks right after Definition 2 to discuss (1) the pursuit of adaptive parameters rather than uniform bounds to obtain sharper stability estimates, (2) the practicality of assuming known parameters, and (3) how these parameters impact the accuracy and robustness of prediction intervals. Although these points are addressed in later sections, readers would benefit from a brief mention here.*\\n\\n Thank you for this insightful suggestion. We have revised the paragraph immediately following Definition 2 to address your concerns. 
Please refer to the texts there highlighted in blue.\\n\\n- **Line 154:** *Using varied parenthesis sizes or brackets might improve readability.*\\n\\n Thank you. We have varied a pair of round parentheses to curly brackets. We also adjusted the sizes of some parentheses/brackets. Please let us know if the current version reads better.\\n\\n- **Line 202:** *Recall the meaning of the augmented data ${\\\\cal D}_j^y$ from Line 70 for context.*\\n\\n Thank you, done.\\n\\n- **Line 367:** *The statement \\\"This leads to wider prediction intervals for all methods, and particularly for SplitCP, more variability in prediction interval length\\\" is not clear to me. It suggests the prediction interval of SplitCP becomes particularly wider due to the increase $m=1\\\\to m=100$. However, SplitCP already appears to vary similarly at both $m=1$ and $m=100$, while the other methods vary more at $m=100$.*\\n\\n Thank you for pointing this out. You're right that this sentence shouldn't be comparing $m=1$ versus $m=100$. In the revision, we have replaced this sentence with a statement that marginally compares SplitCP with other methods for $m=1$ and $m=100$, respectively.\\n\\n- **Line 369:** *While the authors note that derandomization (Gasparin \\\\& Ramdas, 2024) would incur extra computational costs, how does LOO-StabCP compare with derandomization in other aspects such as prediction accuracy, coverage, and stability?*\\n\\n Thank you for this suggestion. Reviewer Rbu1 also mentioned this point. In this revision, we added a new Appendix B to compare our method to derandomized approaches. The conclusion is that our method generally computes faster and suffers less conservatism.\\n\\n- **Line 465:** *The comment \\\"Compared to cfBH, our method is more powerful\\\" could benefit from further clarification. How is this conclusion drawn from Figure 3 and Table 3?* \\n\\n Thank you for your thoughtful question. 
First, we clarify that Table 3, which is redundant with Figure 3, has been moved to the appendix, and Figure 3 has been renumbered as Figure 4 in the revised version. In Figure 4, we compare LOO-cfBH and cfBH across three FDR levels ($q = 0.1, 0.2, 0.3$). At lower $q$ levels, LOO-cfBH achieves lower FDP and higher power than cfBH. This empirical finding indicates that LOO-cfBH is able to correctly reject a greater proportion of false null hypotheses with a smaller proportion of false rejections, highlighting its ability to perform more precise screening compared to cfBH. We hope this clarifies our findings.\"}", "{\"metareview\": \"Computing full conformal prediction can be challenging if applied to new test points because all the evaluations need to be repeated for each new test point. This paper leverages algorithmic stability bounds that are independent of the left-out points to derive faster algorithms.\\n\\nOne weakness I might foresee is that while improving computational time, this independence from the test point will backfire on the adaptability of the CP sets. For example, in a heteroscedastic setting, one wants the sets to depend on the points. It would be nice if the authors could add more experiments along these lines.\", \"additional_comments_on_reviewer_discussion\": [\"Reviewers generally acknowledge its practical contribution and solid theoretical foundations, emphasizing its advantages in handling multiple predictions efficiently while maintaining valid coverage. However, they raise concerns about its novelty and limited application examples. Specifically, while LOO-StabCP demonstrates significant computational savings compared to RO-StabCP, some reviewers feel the extension from replace-one to leave-one-out stability is straightforward and lacks substantial innovation.\", \"The authors responded by emphasizing the conceptual leap of leave-one-out stability as a non-obvious methodological innovation, challenging conventional conformal prediction doctrines. 
They expanded the paper with new applications to neural networks, kernel methods, and bagging, and demonstrated the method's versatility across broader scenarios, including non-convex settings. While the additional results and applications convinced some reviewers to accept or maintain positive ratings, one reviewer remained skeptical about the novelty but acknowledged the value of the added contributions. The overall sentiment reflects recognition of LOO-StabCP's practical relevance and computational efficiency, balanced against debates about its originality.\", \"Reviewer 3RnT: Positive overall, supports acceptance with minor suggestions for improvement.\", \"Reviewer Rbu1: Marginally positive, leans toward acceptance after the authors addressed concerns with expanded experiments and additional analyses.\", \"Reviewer LXUa: Strongly positive, advocates for acceptance following the revisions and enhanced experimental evaluations.\", \"Reviewer VLB9: Borderline, appreciates the revisions and additional results but maintains concerns regarding the novelty of the contribution.\"]}", "{\"title\": \"(Continued)\", \"comment\": \"### Remarks on Neural Networks\\n\\nFurthermore, we would like to add a few important remarks regarding the application of our method to DNN.\\n\\nThe main challenge in applying our method to DNN's is the convexity constraint in our Theorems 2 \\\\& 3.\\nDNN's are typically highly non-convex.\\nOur idea to address DNN is not to analyze the shape of the objective function at the eventually convergent parameter values, which is super-complicated.\\nInstead, we analyze the algorithm SGD as a popular optimizer for deep neural networks.\\nThis led to our new theoretical result, i.e., Theorem 4, which is proved in a different way than its convex counterpart (Theorem 3).\\n\\nNonetheless, the extension of SGD to non-convex functions is not a free lunch.\\nThe price is that the mathematically rigorous stability bound in Theorem 4 may be quite 
conservative.\\n\\nRecall that in the original submission we derived a much tighter stability bound for SGD applicable to convex functions.\\nWe numerically tested its performance on neural networks, and it seemed to perform well.\\nSee Figure 5.\\nIt achieves the desired coverage rate while showing competitive accuracy.\\nComputation-wise, it is much faster than RO-StabCP.\\n\\nTherefore, an intriguing direction for future work is to derive tighter stability bounds for non-convex SGD.\\nThe task seems challenging and may require significant additional effort.\"}
BszvEXQyLM
Phase-Aware KANGaussian : Phase-Regularized 3D Gaussian Splatting with Kolmogorov-Arnold Network
[ "Li Hongguang", "Chaoyu Dong" ]
Vanilla 3D Gaussian Splatting struggles with modelling high frequency details, especially in unbounded scenes. Recent works such as Scaffold-GS and Spec-Gaussian have made tremendous improvements to the reconstruction quality of these high frequency details, specifically in synthetic and bounded scenes, but still struggle with unbounded real world scenes. Therefore, we propose Phase-Aware KANGaussian, a model building on these earlier contributions to produce state-of-the-art reconstruction quality for unbounded real world scenes with greatly improved high frequency details. Phase-Aware KANGaussian introduces a novel phase regularization method that optimizes models from low-to-high frequency, dramatically improving the quality of high frequency details. Phase-Aware KANGaussian is also one of the first few papers to integrate a Kolmogorov-Arnold Network (KAN) into the Gaussian Splatting rendering pipeline to verify its performance against the Multilayer Perceptron (MLP). All in all, Phase-Aware KANGaussian has three main contributions: (1) Introduce a Gaussian Splatting model with state-of-the-art performance in modelling real-world unbounded scenes with high frequency details, (2) a novel phase regularization technique to encode spatial representation and lastly, (3) first few to introduce a KAN into the Gaussian Splatting rendering pipeline.
[ "3D Gaussian Splatting", "Kolmogorov Arnold Network", "Phase Regularization", "Specular" ]
https://openreview.net/pdf?id=BszvEXQyLM
https://openreview.net/forum?id=BszvEXQyLM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wfj4fGK1rW", "piFaSk0beh", "mYrqmszNUX", "egIXXnWNzO", "djPvxt23Hh", "Zqr152jhXQ", "V9NjEdAl8l", "EppXm8raZv", "9N8XTK87c7", "5FRJZc5KVA" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733135625000, 1733133570441, 1730612451641, 1730556598723, 1733148773696, 1738411051398, 1729092321876, 1732588761113, 1729339287807, 1733132838791 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7201/Authors" ], [ "ICLR.cc/2025/Conference/Submission7201/Authors" ], [ "ICLR.cc/2025/Conference/Submission7201/Reviewer_HoeG" ], [ "ICLR.cc/2025/Conference/Submission7201/Reviewer_eqJD" ], [ "ICLR.cc/2025/Conference/Submission7201/Authors" ], [ "ICLR.cc/2025/Conference/Submission7201/Authors" ], [ "ICLR.cc/2025/Conference/Submission7201/Reviewer_qU3P" ], [ "ICLR.cc/2025/Conference/Submission7201/Authors" ], [ "ICLR.cc/2025/Conference/Submission7201/Reviewer_dgXq" ], [ "ICLR.cc/2025/Conference/Submission7201/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reply to Reviewer dgXq\", \"comment\": \"We thank the reviewer for their constructive feedback on our paper.\\n\\n> This paper lacks sufficiently novel methods. For example, Sections 3.2.1 and 3.2.2 are largely based on Spec-Gaussian (with the exception of differences in KAN and MLP). Section 3.2.3, on the other hand, is based on Fre-GS. The paper seems to be a KAN version that combines Spec-Gaussian with Fre-GS. I recommend that the authors move parts of these sections to the Preliminary section.\\n\\nWe understand the reviewer's concerns about our paper's novelty. However, the choice of KAN is intentional to exploit the locality property of the network to better model specular reflections that are highly view-dependent. 
There is great potential in KANs for this task: as we demonstrate in our ablation study, a KAN with 16x8 hidden layers can generate metric scores comparable to an MLP with 128x128x128 hidden layers (used in SpecGaussian). Unfortunately, as KANs are new, the main bottleneck is the under-optimized implementation, as the computation of the B-splines has yet to be parallelized. We hope that our work can spark more interest in KANs, hopefully leading to better-optimized libraries that can generate new groundbreaking results. \\n\\nWith regards to frequency regularization, incorporating an expanding mask into the regularization process has been widely utilized in other applications of 3D computer vision, including in NeRFs (FreeNeRF [R1]). However, as far as we are aware, the approach of only including the phase term in the regularization process is novel. The motivation behind excluding the amplitude term is that spatial information is mostly contained in the phase, so including (and optimizing for) the amplitude term might dilute the effectiveness of the regularization on the reconstruction. This is verified in the ablation study, where we compared the reconstruction quality when including both terms against only including the phase term, and our method yielded better metric scores in general.\\n\\nLastly, with regards to the organization of the paper, we hope for the reviewer's understanding: the sections in question have proper citations in their body paragraphs and are crucial to the rendering pipeline of the method, so moving them to the preliminary section might hinder the readability and flow of the paper.\\n\\n[R1] Yang, J., Pavone, M., & Wang, Y. (2023). Freenerf: Improving few-shot neural rendering with free frequency regularization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 
8254-8263).\\n\\n> The authors dedicated a significant portion of the paper to explaining how KANGaussian theoretically offers a higher capacity for modeling high-frequency information. However, in the Experiments section, there are no examples that demonstrate improvements in modeling specular components; instead, the focus is on floater removal, as seen in Figure 7. It could be much better if the authors used scenes with specular highlights, such as the example shown in Figure 5, to substantiate their claims and prove the effectiveness of their method in improving specular modeling.\\n\\nWe thank the reviewer for the suggestion. We are currently working to include additional metric scores for a specular synthetic dataset and would ask for the reviewer's patience. \\n\\n> Missing comparison of training time and rendering efficiency (FPS). There is still a computational speed difference between KAN and MLP. Although KANGaussian may not have an advantage in rendering speed, it could be much better to provide these details to give readers a clearer understanding of the strengths and weaknesses of the KAN-based approach.\\n\\nWe agree with the reviewer and have revised the paper to include the average slowdown in training time (4x slowdown vs SpecGaussian) and in rendering time (6x slowdown vs SpecGaussian). However, we also seek the reviewer's understanding in that KANs are new, so the libraries are under-optimized and the parameter updates for KANs have yet to be parallelized. \\n\\n> Missing comparison of the number of Gaussians. The quantity of Gaussians has a significant impact on rendering metrics, and the authors need to provide a comparison of the actual number of Gaussians used in each method to ensure a fair comparison.\\n\\nThe number of Gaussians remains unaffected with the introduction of KAN. 
However, there is a 2x increase in number of gaussians when frequency regularization is applied, as it encourages more gaussians to be produced in order to model high frequency details.\\n\\n> I'm very curious about the resolution the authors used for Mip-NeRF 360. Did they follow the Mip-360 setup with downsampling factors of 4 for outdoor scenes and 2 for indoor scenes, or did they adopt the 3DGS setting where the images are uniformly cropped to a width of 1600 pixels?\\n\\nWe utilized the Mip-360 setup as described in the question.\"}", "{\"title\": \"Response to Reviewer HoeG (Part 2)\", \"comment\": \"> The lack of evaluation on synthetic datasets. Through the authors' claim of their SOTA performance on real unbounded scenes, they should also validate the proposed method on synthetic shiny scenes as used in Spec-Gaussian [R2] and [R3].\\n\\nWe agree with the reviewer. At the time of writing the first version of our paper, the synthetic shiny dataset was not released by the authors of Spec-Gaussian. However, they have since made their dataset public, and we have been attempting to run our model on the shiny dataset in Spec-Gaussian[R2], but COLMAP fails on the dataset and hence we are unable to extract metric scores from the dataset. \\n\\n[R2] Yang, Ziyi, et al. \\\"Spec-gaussian: Anisotropic view-dependent appearance for 3d gaussian splatting.\\\" arXiv preprint arXiv:2402.15870 (2024). [R3] Ye, Keyang, Qiming Hou, and Kun Zhou. \\\"3d gaussian splatting with deferred reflection.\\\" ACM SIGGRAPH 2024 Conference Papers. 2024.\\n\\n> The presented experimental results are not fully convincing. For instance, in the overall comparison of real datasets (Table 1), the proposed method ranks second in PSNR, underperforming compared to Spec-Gaussian [R2]. Additionally, in the ablation studies, the \\u201cNo KAN\\u201d variant surprisingly outperforms the proposed method on Mip-NeRF 360 and Tanks&Temples in SSIM and LPIPS. 
Given SSIM and LPIPS\\u2019 importance in assessing texture detail, more thorough explanations and additional experiments are needed to validate the effectiveness of each module.\\n\\nWe understand the concerns of the reviewer, and agree with the analysis about the importance of SSIM and LPIPS in assessing textural detail. For the \\\"No KAN\\\" variant, we remove the specular highlights completely. Thus, the SSIM and LPIPS scores are unaffected by errors from modelling the specular highlights. As such, there might be minor discrepancies in SSIM and LPIPS scores, which are less significant in comparison to the difference in PSNR score.\\n\\n> A direct comparison between the Kolmogorov-Arnold Network (KAN) and the MLP used in Spec-Gaussian [R2] is missing. Since the experimental results do not consistently surpass Spec-Gaussian on PSNR (Table 1), further evidence is needed to substantiate the choice of KAN over MLP.\\n\\nWe appreciate the reviewer's feedback. We have added an additional Table 3 in the ablation study to provide the direct comparison. Additionally, we hope for the reviewer's understanding in that the KAN network used is less complex than the MLP used in SpecGaussian, which could possibly explain the discrepancy in metric scores.\\n\\n> The hyperparameters are not provided, e.g., the scalar terms of production of scale and phase regularization (Equation 19) used in experiments. Besides, an analysis or explanation of hyperparameter choice is better to be provided.\\n\\nFor Phase-Aware KANGaussian, through hyperparameter tuning, we found that choosing the scalar terms such that the regularization loss accounts for around 20% of the total loss (~10^-5) yielded the best results for the three datasets we experimented on, as the regularization then performs more of a supportive function rather than overwhelming the loss function. 
However, the optimal hyperparameter is closely related to the specific dataset, as the distribution of phase signals in each dataset might be different.\\n\\n> Certain aspects of the writing lack clarity and structural coherence, hindering readability and comprehension of the paper\\u2019s innovations.\\n\\nWe thank the reviewer for their comments and have revised the paper to hopefully improve the readability and comprehension.\"}", "{\"summary\": \"This paper presents a 3DGS method, Phase-Aware KANGaussian, which aims to enhance 3D reconstruction quality, particularly for capturing high-frequency details in unbounded real-world scenes. The authors propose a novel phase regularization technique that progressively optimizes model training across frequencies from low to high. Additionally, they integrate the Kolmogorov-Arnold Network (KAN) into anisotropic color modeling.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The introduction of KAN for modeling anisotropic color is novel. It might be theoretically better than MLP, as KAN exhibits a locality property due to its B-Splines as claimed in Line 245-246 and Figure 4.\", \"weaknesses\": \"## Major Concerns:\\n1. The novelty of phase regularization is questionable. FreGS [R1] has already provided frequency regularization on both the amplitude and phase parts. Equation 7 in this paper is similar to Equation 6 in FreGS. Besides, this paper introduces frequency filtering by expanding the frequency band, which is similar to the frequency annealing proposed in FreGS. For example, Equation 17 in this paper is similar to Equation 13 in FreGS. Please clarify the difference from FreGS, particularly for the above two aspects. Also, a comprehensive experimental comparison to validate the advantage of the proposed method is necessary.\\n\\n[R1] Zhang, Jiahui, et al. 
\\\"Fregs: 3d gaussian splatting with progressive frequency regularization.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n2. The lack of evaluation on synthetic datasets. Through the authors' claim of their SOTA performance on real unbounded scenes, they should also validate the proposed method on synthetic shiny scenes as used in Spec-Gaussian [R2] and [R3].\\n\\n[R2] Yang, Ziyi, et al. \\\"Spec-gaussian: Anisotropic view-dependent appearance for 3d gaussian splatting.\\\" arXiv preprint arXiv:2402.15870 (2024).\\n[R3] Ye, Keyang, Qiming Hou, and Kun Zhou. \\\"3d gaussian splatting with deferred reflection.\\\" ACM SIGGRAPH 2024 Conference Papers. 2024.\\n\\n3. The experimental results presented in Table 1 for comparisons with Scaffold-GS [R4] raise some concerns. Specifically, the performance of Scaffold-GS on Mip-NeRF 360 is notably lower than the values reported in the original paper, whereas results on the other two datasets are the same. The authors do not clarify whether these results are based on retrained models or reporting values from the original work. Thus, the validity of the conclusions drawn from these comparisons is unclear. Please provide a more detailed description of the experimental setup and an explanation for the observed discrepancies in these results.\\n\\n[R4] Lu, Tao, et al. \\u201cScaffold-GS: Structured 3d gaussians for view-adaptive rendering.\\u201d Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n4. The presented experimental results are not fully convincing. For instance, in the overall comparison of real datasets (Table 1), the proposed method ranks second in PSNR, underperforming compared to Spec-Gaussian [R2]. Additionally, in the ablation studies, the \\u201cNo KAN\\u201d variant surprisingly outperforms the proposed method on Mip-NeRF 360 and Tanks&Temples in SSIM and LPIPS. 
Given SSIM and LPIPS\\u2019 importance in assessing texture detail, more thorough explanations and additional experiments are needed to validate the effectiveness of each module.\\n\\n5. A direct comparison between the Kolmogorov-Arnold Network (KAN) and the MLP used in Spec-Gaussian [R2] is missing. Since the experimental results do not consistently surpass Spec-Gaussian on PSNR (Table 1), further evidence is needed to substantiate the choice of KAN over MLP.\\n \\n6. The hyperparameters are not provided, e.g., the scalar terms of production of scale and phase regularization (Equation 19) used in experiments. Besides, an analysis or explanation of hyperparameter choice is better to be provided.\\n\\n7. Certain aspects of the writing lack clarity and structural coherence, hindering readability and comprehension of the paper\\u2019s innovations. Here are some examples:\\n* Lines 253-254: Potential confusion between \\u201cspherical Gaussians\\u201d and \\u201cspherical harmonics.\\u201d\\n* Lines 259-260: Grammar issues in explaining the smooth and exponential terms.\\n* Line 402: Mislabeling \\u201c\\\\lambda_{prod}\\u201d as \\u201c\\\\lambda_{}\\u201d.\\n\\n\\n## Suggestions\\n1. A visualization of the ablation studies would offer clearer insights into the contributions of each component.\\n \\n2. 
A more detailed comparison of KAN and MLP is recommended, in addition to the accuracy, evaluations of time efficiency and resource consumption are also required.\\n\\n\\nIn summary, while this paper incorporates KAN into the 3DGS framework, the highlighted weaknesses, such as less novelty, lack of comparison on shiny datasets, unclear comparisons, suboptimal experimental results, insufficient justification of KAN, and limited validation, need to be addressed.\", \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This study investigates integrating KAN into the 3DGS framework to enhance rendering quality. By replacing MLP with KAN in Neural-Appearance GS techniques like Scaffold-GS, the authors achieve improved visual outcomes. Phase regularization is applied to further refine the visuals, leading to satisfactory results. However, the approach is somewhat limited, as it combines KAN and GS directly on color prediction, serving as a continuation of Scaffold-GS and Spec-GS.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper excels in presenting a novel integration of KAN within the 3DGS framework, leading to significant improvements in rendering quality. The authors effectively demonstrate how replacing MLP with KAN in established methods like Spec-Gaussian and Fre-GS enhances visual outcomes. This innovative approach not only improves the clarity and detail of rendered images but also introduces phase regularization to refine the results further.\\n\\n2. The paper is well-organized and clearly written, making complex concepts accessible to readers. The authors provide comprehensive ablation studies and figure illustrations that thoroughly support their claims, showcasing the superiority of KANGaussian over traditional methods in real-world scenarios. 
The method's potential to handle high-frequency details and improve visual fidelity is well-articulated, backed by detailed experimental results that highlight its practical applicability and robustness.\", \"weaknesses\": \"1. There is no comparison of training time and rendering speed. One of Gaussian's greatest advantages is its fast rendering and minimal training time. Including quantitative measurements of training and inference time would clarify KAN's impact on GS.\\n\\n2. As mentioned in the summary, I find the direct combination of KAN and GS in the well-explored area of neural GS appearance to be somewhat trivial. However, I believe that experimenting with novel technique combinations and sharing results benefits the community. I encourage such efforts, especially when the technique is straightforward. The results, though, are not significantly superior to other methods.\", \"questions\": \"I hope the author reports the training and inference speeds, as these are two of GS's greatest advantages and are of significant interest to readers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer qU3P\", \"comment\": \"We thank the reviewer for the suggestions.\\n\\n> The figure on Page 9 is too small. I have to zoom in \\\"300%\\\" to see it.\\n\\nWe have revised the figure and moved some visualization samples into the supplementary part of the document.\\n\\n> Please check whether the citation format is suitable for ICLR 2025. Sometimes the names of people are mixed with the sentences of the article, making it confusing. For example, \\\"we employ Kolmogorov Arnold Networks (KANs) in the rendering pipeline in contrast to earlier works Lu et al. 
(2023)\\\" -> \\\"we employ Kolmogorov Arnold Networks (KANs) in the rendering pipeline in contrast to earlier works (Lu & Yu, 2023)\\\"\\n\\nWe thank the reviewer for pointing the error in citation style out. We have revised the citation style accordingly.\\n\\n> Could you provide insights into the computational demands of your model, particularly regarding the use of KAN? (how much slower?)\\n\\nWe have revised the paper to more clearly state the average slowdown of our model. Specifically, we see a 4x slowdown in training and 6x slowdown in rendering when compared against SpecGaussian. However, we hope for the reviewer's understanding that the KAN library is under optimized and has yet to be parallelized, leading to the discrepancy in computation time.\\n\\n> Could you elaborate on the potential causes for the observed decrease in PSNR?\\n\\nThe KAN we have utilized in our model is only a 8x8 hidden layer KAN, which is less complex than the 128x128x128 MLP used in SpecGaussian. As such, the difference in PSNR is likely due to the model complexity, and we have further conducted an ablation study to verify this claim. We removed the frequency regularization component to free up CUDA memory to train a slightly more complex 16x8 hidden layer KAN model, and the results are tabulated below. We can observe a general improvement in metric scores for the 16x8 KAN model, highlighting the potential of KANs limited by implementation.\", \"note\": \"The models tabulated below do **not** have frequency regularization for fair comparison.\\n|Model|Mip360 PSNR|Mip360 SSIM|Mip360 LPIPS|T&T PSNR|T&T SSIM|T&T LPIPS|DB PSNR|DB SSIM|DB LPIPS\\n|-|-|-|-|-|-|-|-|-|-|\\n|128x128x128 MLP (SpecGaussian)|28.00|0.819|0.205|24.54|0.857|0.175|**30.34**|0.909|0.253\\n|8x8 KAN|27.95|0.820|0.193|24.45|**0.863**|**0.159**|30.30|**0.910**|**0.239**\\n|16x8 KAN|**28.01**|**0.822**|**0.191**|**24.57**|0.862|0.161|30.29|**0.910**|0.240\\n\\n> There is an error in 3.2.4. 
\\"and \\u03bb\\u25a1 are scalar values to adjust...\\"\\n\\nWe apologize for the confusion, and have revised the paper to improve readability.\\n\\n> The ablation study is confusing. e.g. Does \\"Phase Regularization (Ours)\\" contain KAN or does it only contain Phase Regularization? If it does not, then which one is \\u201cNo KAN\\u201d supposed to be compared to? Please provide a clear description of each ablation condition, including which components are present or absent in each case.\\n\\nWe apologize for the confusion again. We have revised the paper to describe each ablation condition more clearly.\\n\\n> The baseline (Spec-Gaussian (Yang, 2024)) of this article is not a peer-reviewed article; are there peer-reviewed alternatives that could serve as additional comparisons?\\n\\nThe Spec-Gaussian paper has since been peer-reviewed and accepted to NeurIPS, after the submission of our paper.\\n\\n> How does the model perform under varied lighting conditions and metal areas, especially given its focus on high-frequency details, which can be highly sensitive to such changes?\\n\\nWe thank the reviewer for the suggestion. We have attempted to run the model on the synthetic dataset of SpecGaussian [R2], which was made public after the submission of our paper, in order to simulate the varied lighting conditions and metallic reflections. Unfortunately, COLMAP fails on the dataset, and we apologize for not being able to produce metric scores within the timespan of the rebuttal phase, due to the lengthy training process required for the KAN network.\\n\\n[R2] Yang, Ziyi, et al. 
\\"Spec-gaussian: Anisotropic view-dependent appearance for 3d gaussian splatting.\\" arXiv preprint arXiv:2402.15870 (2024).\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper presents \\\"Phase-Aware KANGaussian,\\\" a 3D reconstruction model that enhances the detail and quality of unbounded real-world scenes, particularly in high-frequency details. Its contributions can be summarised as:\\n1. Integration of 3D Gaussian Splatting with a Kolmogorov-Arnold Network (KAN) in the rendering procedure to improve rendering quality.\\n2. A phase regularization technique aimed at optimizing models from low to high frequency to dramatically enhance high-frequency detail rendering, which involves filtering before computing a regularization term in the Fourier domain.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The use of Kolmogorov-Arnold Networks in the 3DGS rendering pipeline is innovative, and the authors are among the first to do so.\\n2. The phase regularization approach for controlling frequency details during training could lead to more precise control over detail rendering in complex scenes.\\n3. The derivations of the formulas and the figures in the earlier sections are used appropriately and clearly.\\n4. The problem is well stated.\", \"weaknesses\": \"1. The motivation for integrating KAN into 3DGS is not clear. How is the locality property of KANs expected to benefit the modeling of specular highlights and other high-frequency details?\\n2. The authors put too much content in PRELIMINARIES, and it feels as though the article was cobbled together. Can you reorganize the preliminaries section to focus more tightly on the concepts most crucial to understanding the novel contributions?\\n3. 
There is a potential risk of overfitting to high-frequency details at the expense of overall scene fidelity, as indicated by the slightly lower PSNR scores compared to Spec-Gaussian.\\n4. The baseline (Spec-Gaussian (Yang, 2024)) of this article is not a peer-reviewed article; are there peer-reviewed alternatives that could serve as additional comparisons?\\n5. The ablation study is confusing. e.g. Does \\\"Phase Regularization (Ours)\\\" contain KAN or does it only contain Phase Regularization? If it does not, then which one is \\u201cNo KAN\\u201d supposed to be compared to? Please provide a clear description of each ablation condition, including which components are present or absent in each case.\", \"questions\": \"1. The figure on Page 9 is too small. I have to zoom in \\\"300%\\\" to see it.\\n2. Please check whether the citation format is suitable for ICLR 2025. Sometimes the names of people are mixed with the sentences of the article, making it confusing. For example, \\\"we employ Kolmogorov Arnold Networks (KANs) in the rendering pipeline in contrast to earlier works Lu et al. (2023)\\\" -> \\\"we employ Kolmogorov Arnold Networks (KANs) in the rendering pipeline in contrast to earlier works (Lu & Yu, 2023)\\\"\\n3. Could you provide some visual results for the ablation study? For example, visual results from your model with some of its components removed.\\n4. Could you provide insights into the computational demands of your model, particularly regarding the use of KAN? (how much slower?)\\n5. How does the model perform under varied lighting conditions and metal areas, especially given its focus on high-frequency details, which can be highly sensitive to such changes?\\n6. Could you elaborate on the potential causes for the observed decrease in PSNR?\\n7. There is an error in 3.2.4. \\"and \\u03bb\\u25a1 are scalar values to adjust...\\"\\n8. 
What are the differences and advantages of your model over Mip-Splatting [1], which also focuses on high frequency?\\n\\n[1] Yu, Z., Chen, A., Huang, B., Sattler, T. and Geiger, A., 2024. Mip-splatting: Alias-free 3d gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 19447-19456).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer HoeG (Part 1)\", \"comment\": \"We appreciate and thank the reviewer for their constructive feedback. We kindly request the reviewer's patience as we are running experiments to answer the reviewer's queries and revising the paper with the given feedback.\\n\\n>The novelty of phase regularization is questionable. FreGS [R1] has already provided frequency regularization on both the amplitude and phase parts. Equation 7 in this paper is similar to Equation 6 in FreGS. Besides, this paper introduces frequency filtering by expanding the frequency band, which is similar to the frequency annealing proposed in FreGS. For example, Equation 17 in this paper is similar to Equation 13 in FreGS. Please clarify the difference from FreGS, particularly for the above two aspects. Also, a comprehensive experimental comparison to validate the superior advantage of the proposed method is necessary.\\n\\nFrequency regularization with expanding masks has been widely utilized in other applications of 3D computer vision, including in NeRFs (FreeNeRF [R1]). However, as far as we are aware, the approach of including only the phase term in the regularization process is novel. The motivation behind excluding the amplitude term is that spatial information is mostly contained in the phase, so including (and optimizing for) the amplitude term might dilute the effectiveness of the regularization on the reconstruction. 
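A minimal sketch of such a phase-only frequency loss is shown below (a hypothetical NumPy illustration, not the paper's actual implementation; normalizing each spectrum to unit magnitude stands in for an explicit phase-angle comparison and avoids 2-pi wrap-around):

```python
import numpy as np

def phase_only_loss(rendered, target, eps=1e-8):
    """Hypothetical phase-only frequency loss: compare the Fourier
    phase of the rendered and ground-truth images while discarding
    amplitude, since spatial structure is mostly encoded in the phase.

    rendered, target: (H, W) grayscale image arrays.
    """
    F_r = np.fft.fft2(rendered)
    F_t = np.fft.fft2(target)
    # Dividing each spectrum by its magnitude keeps only the phase;
    # comparing the normalized complex values sidesteps the wrap-around
    # a direct angle difference would suffer from.
    phase_r = F_r / (np.abs(F_r) + eps)
    phase_t = F_t / (np.abs(F_t) + eps)
    return float(np.mean(np.abs(phase_r - phase_t)))
```

For identical images the loss is zero, while spatially shifting the same content changes only the phase and is therefore penalized, which matches the intuition that the phase carries the spatial information.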
This is verified in the ablation study, where we compared the reconstruction quality of including both terms against including only the phase term, and our method yielded better metric scores in general.\\n\\nWith regards to the experimental comparison, we are unable to compare our proposed method against FreGS\\u2019 method directly, as the separation of low- and high-frequency regularization losses is incompatible with the training pipeline of the Neural Gaussian Splatting approach. Specifically, the neural approach involves finding the subset of neural gaussians that need to be trained for a specific training image, but the low-frequency components often do not contain sufficient spatial information to identify the subset (leading to an empty set of neural gaussians being identified), which crashes the code during the backward pass. Therefore, our next best alternative was to use only one frequency loss term with both amplitude and phase (akin to setting the same weights for both frequency regularization terms in FreGS) and compare the reconstruction against omitting the amplitude term, which were exactly the models used in the ablation study.\\n\\n[R1] Yang, J., Pavone, M., & Wang, Y. (2023). FreeNeRF: Improving few-shot neural rendering with free frequency regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8254-8263).\\n\\n> A direct comparison between the Kolmogorov-Arnold Network (KAN) and the MLP used in Spec-Gaussian [R2] is missing. Since the experimental results do not consistently surpass Spec-Gaussian on PSNR (Table 1), further evidence is needed to substantiate the choice of KAN over MLP.\\n\\nAs KANs are new, their libraries are under-optimized compared to MLP libraries. As such, we were only able to fit a relatively shallow KAN model with 8x8 hidden layers (vs SpecGaussian\\u2019s 128x128x128 hidden layers) with our frequency regularization, as any larger model would run out of CUDA memory. 
However, by excluding the frequency regularization entirely to save CUDA memory, we were able to fit a more complex 16x8 KAN model, and the performance metrics are summarized in the table below. We observe that we are indeed able to obtain better metric scores with a more complex KAN, indicating untapped potential that is currently limited by the implementation. Thus, we hope that our work can inspire more researchers to incorporate KANs into their work, which would motivate better-optimized KAN libraries and facilitate new breakthroughs.\", \"note\": \"The models tabulated below do **not** have frequency regularization for fair comparison.\\n|Model|Mip360 PSNR|Mip360 SSIM|Mip360 LPIPS|T&T PSNR|T&T SSIM|T&T LPIPS|DB PSNR|DB SSIM|DB LPIPS|\\n|-|-|-|-|-|-|-|-|-|-|\\n|128x128x128 MLP (SpecGaussian)|28.00|0.819|0.205|24.54|0.857|0.175|**30.34**|0.909|0.253|\\n|8x8 KAN|27.95|0.820|0.193|24.45|**0.863**|**0.159**|30.30|**0.910**|**0.239**|\\n|16x8 KAN|**28.01**|**0.822**|**0.191**|**24.57**|0.862|0.161|30.29|**0.910**|0.240|\"}", "{\"summary\": \"This paper aims to apply KAN within the 3DGS framework to achieve higher-quality rendering. The authors primarily build upon Spec-Gaussian, Scaffold-GS, and Fre-GS, replacing the MLP with KAN, which results in improved rendering quality. Additionally, phase regularization is introduced to further enhance visual results. The experimental results show that KANGaussian achieves impressive results in real-world scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is well-written and easy to understand.\", \"The use of KAN in Gaussian splatting is novel.\", \"The impressive visual results and ablation studies are appreciated.\"], \"weaknesses\": \"1. This paper lacks sufficiently novel methods. For example, Sections 3.2.1 and 3.2.2 are largely based on Spec-Gaussian (with the exception of differences in KAN and MLP). Section 3.2.3, on the other hand, is based on Fre-GS. 
The paper seems to be a KAN version that combines Spec-Gaussian with Fre-GS. I recommend that the authors move parts of these sections to the Preliminary section.\\n2. The authors dedicated a significant portion of the paper to explaining how KANGaussian theoretically offers a higher capacity for modeling high-frequency information. However, in the Experiments section, there are no examples that demonstrate improvements in modeling specular components; instead, the focus is on floater removal, as seen in Figure 7. It would be much better if the authors used scenes with specular highlights, such as the example shown in Figure 5, to substantiate their claims and prove the effectiveness of their method in improving specular modeling.\\n3. Missing comparison of training time and rendering efficiency (FPS). There is still a computational speed difference between KAN and MLP. Although KANGaussian may not have an advantage in rendering speed, it would be better to provide these details to give readers a clearer understanding of the strengths and weaknesses of the KAN-based approach.\\n4. Missing comparison of the number of Gaussians. The quantity of Gaussians has a significant impact on rendering metrics, and the authors need to provide a comparison of the actual number of Gaussians used in each method to ensure a fair comparison.\", \"questions\": \"I'm very curious about the resolution the authors used for Mip-NeRF 360. Did they follow the Mip-360 setup with downsampling factors of 4 for outdoor scenes and 2 for indoor scenes, or did they adopt the 3DGS setting where the images are uniformly cropped to a width of 1600 pixels?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer eqJD\", \"comment\": \"We appreciate the reviewer's constructive feedback.\\n\\n> There is no comparison of training time and rendering speed. 
One of Gaussian's greatest advantages is its fast rendering and minimal training time. Including quantitative measurements of training and inference time would clarify KAN's impact on GS.\\n\\nWe acknowledge the reviewer's concerns and have included in the revised paper the average slowdown in training (4x slower vs SpecGaussian) and inference time (6x slower vs SpecGaussian), as well as an explanation for the slowdown: primarily, the KAN library is relatively new, and the computation of the basis functions for KAN has yet to be parallelized.\"}
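As a rough, hypothetical illustration of where the slowdown discussed in the reply comes from (toy NumPy code, not the pykan library; a radial basis stands in for the B-splines used in practice), an MLP layer is one fused matmul plus a fixed nonlinearity, while a KAN-style layer must evaluate a separate learnable 1-D function on every input-output edge:

```python
import numpy as np

def kan_layer(x, coeffs, centers, width=0.5):
    """Toy KAN-style layer: each input-output edge applies its own
    learnable 1-D function, here a radial-basis expansion standing in
    for B-splines.

    x:       (batch, d_in) inputs
    coeffs:  (d_in, d_out, n_basis) per-edge basis coefficients
    centers: (n_basis,) shared grid of basis centers
    """
    # Evaluate every basis function at every input value, giving shape
    # (batch, d_in, n_basis). This per-edge expansion is the step that
    # is hard to fuse into a single matmul, which is one reason an
    # unoptimized KAN runs slower than an MLP of comparable width.
    basis = np.exp(-((x[..., None] - centers) / width) ** 2)
    # Contract over inputs and basis functions to produce the outputs.
    return np.einsum('bik,iok->bo', basis, coeffs)

def mlp_layer(x, W, b):
    """For contrast: one fused matmul plus a fixed pointwise
    nonlinearity, a pattern GPU libraries optimize heavily."""
    return np.maximum(x @ W + b, 0.0)
```

The einsum contraction makes the cost structure visible: the KAN layer materializes a `(batch, d_in, n_basis)` tensor before contracting, whereas the MLP goes straight from `(batch, d_in)` to `(batch, d_out)` in one optimized kernel.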